OLAP
Contents
OLAP fundamentals
Evaluations
Applix TM1
Brio Technology Brio Enterprise
Business Objects BusinessObjects
Cognos PowerPlay
Gentia Software Gentia Millenium Applications Platform
Hummingbird BI/Suite
Hyperion Solutions Hyperion Essbase
Information Advantage DecisionSuite
Microsoft SQL Server 7.0 OLAP Services
Microstrategy DSS Product Suite
Oracle Oracle Express Development Suite
Pilot Software Pilot Decision Support Suite
SAP AG SAP Business Information Warehouse
Seagate Seagate Holos
Sterling Software Eureka:Suite
WhiteLight Systems Whitelight Analytic Application Server
Using OLAP to make better decisions
Overview
What does an OLAP tool do?
Ovum definition of OLAP
The uses of OLAP
Overview
Decision making is at the heart of running a business. Whatever the depart-
mental function or level of management, there are decisions to be made.
Decisions range from operational issues requiring immediate resolution to
longer term strategic issues. At the heart of decision making is access to
quality information, meaning that it is correct, complete, timely and consist-
ent. It is generally implied (rather than explicitly stated) that this informa-
tion must also be in an accessible form.
Online analytical processing (OLAP) is an important technology for
organisations looking for better ways to access and analyse information. OLAP can
enable organisations to improve their analysis of performance indicators,
manage their customer relationships more efficiently and support critical
parts of the manufacturing process.
Given the current corporate climate, all decision makers need to know about,
and exploit, this technology. Improving decision making with tool support is
not an option; it is an imperative.
What does an OLAP tool do?
Online analytical processing (OLAP) is the interactive analysis of business
information. End users can explore important business measures (such as
profits, sales and costs) along many different ‘dimensions’. With an OLAP
tool, the user moves seamlessly from one perspective on the business (‘an-
nual sales for all stores’) to another (‘the most profitable stores over the last
three months’) and drills between different levels of detail (sales by day,
week or quarter). This interactive exploration of information is commonly
referred to as multidimensional analysis. The common factor defining all
OLAP tools – and there are many different implementations of the core
functionality – is an analytical engine that turns corporate data into multidi-
mensional information for online analysis.
Complex decision support and tailored, easy-to-use applications with limited
functionality can be built with OLAP tools. However, OLAP tools also sup-
port applications that match the needs of a much wider range of users.
These applications are characterised by the flexibility offered to the user not
merely in terms of navigation through a multidimensional model, but also in
terms of the definition of reports and applications.
OLAP applications are characterised by a lack of fixed structure. An OLAP
tool provides an analytical environment for the power user or specialist
knowledge worker, which enables them to use a range of functions to explore
the information available. As well as core multidimensional operations such
as drilling and rotation, users can quickly define new reports and may even
have access to advanced features such as forecasting algorithms, data
mining tools or software agents.
The overlapping relationship between reporting, OLAP and data mining
tools is shown in Figure 1.
Reporting tools are aimed at a general audience, and the results are dissemi-
nated throughout the organisation. OLAP tools are specialised for the
interactive exploration of multidimensional information and are used at all
levels of the organisation. The division between the two types of tool is not
totally clear-cut however, because some reporting tools have limited facilities
to allow users to explore data, and OLAP tools benefit from some of the
features of reporting tools.
At the other end of the spectrum, data mining tools allow users to find
patterns and explore data using less structured hypotheses. Some OLAP
tools offer limited data mining features, although the current trend is more
towards integration with data mining tools than an expansion of the func-
tionality of the OLAP tool.
Ovum definition of OLAP
Information is a resource
Thirty years of automating manual and administrative processes has gener-
ated unimaginable amounts of data. The amount of data collected has
further escalated with the widespread use of bar codes and EPOS systems
and dramatic reduction in the price/performance ratio for collecting, storing
and analysing that data. Now more than ever, exploitation of this resource is
seen as a crucial element in the armoury of any competitive strategy.
There are three major driving forces behind the desire to make better use of
information within all organisations.
The complexity of the market
The increasing volume of data is only one aspect of an increasingly complex
commercial environment. Deregulation of markets, new competitors, new
forms of relationship (with both customers and suppliers), and external
technological, social and economic changes all help to further complicate the
forecasting and planning process.
Customer focus
Competitive advantage is no longer seen in terms of price and quality alone.
Companies must be able to innovate to survive. They must understand their
customers' needs and be able to meet them in an increasingly personalised
fashion. The move from mass marketing to individual marketing requires
great resources to be devoted to information collection and analysis.
Organisational change
The 1990s have seen major organisational changes on a worldwide scale.
Business process re-engineering has led to a drastic thinning of middle
management ranks and a new emphasis on flatter, more flexible organisa-
tion structures. The re-engineered organisation requires information to be
available to those who need it to make the most effective decisions at the
most effective time.
Overview
The Ovum evaluation framework for OLAP
Important considerations when constructing a profile of OLAP requirements
Questions that need to be answered
Overview
Unfortunately there is no universally ‘best’ tool that magically adjusts to the
needs of your users, connects to the available data sources, requires zero
maintenance, scales without limit and comes free with Cornflakes. While we
wait for this to happen, we need to acknowledge that tools and organisa-
tional needs are very diverse. Effective use of OLAP results from getting the
best fit between the two.
Choosing an OLAP solution is a multi-faceted decision and the starting point
is to consider your requirements. In this section, we outline the important
issues that influence this. One consequence of this diversity is that you may
require several tools to meet the interactive decision support requirements
in your organisation. You may also require several tools if you adopt a ‘best
of breed’ approach to creating an OLAP system. We describe seven questions
that need to be addressed to establish your requirements profile.
To enable you to use your profile of needs with our evaluations, we summa-
rise the evaluation framework and show in a summary chart how your
needs can be matched to the right tool using our evaluation framework.
The Ovum evaluation framework for OLAP
The aim of the Ovum evaluation framework is to provide a comprehensive
means of describing OLAP tools. The framework covers the totality of OLAP
functionality, so none of the tools we have evaluated offer all the features in
all the categories. Indeed, even if a tool did, it would not necessarily make
the ideal tool for every user. Many users would be paying for unused func-
tionality.
In our evaluations, we describe the components of each toolset, the architec-
tural configurations supported and describe the support provided for all
aspects of OLAP use. A full description is given in the Guide to the Evalua-
tions; here, the eight perspectives of OLAP functionality are briefly summa-
rised.
End-user functionality
How easy is it for casual users to find and use a previously created model?
We also consider support for report distribution and subscription.
Building the business model
Does the tool enable the model builder to build a complex multidimensional
business model?
Advanced analytical power
What in-built support does the tool provide for complex analytics?
Web support
Can the tool be used to access and create models via the Web?
Management
How easy is it to manage the models, persistent data and users?
Adaptability
How does the tool ensure that the data sources, models, reports derived from
these and metadata are all synchronised?
Performance tunability
What are the tuning options?
Customisation
What support is available to customise and develop applications?
In the following pages, we describe the major issues that need to be
considered when choosing an OLAP tool, and how these relate to these
perspectives.
Important considerations when constructing a profile of OLAP
requirements
Overview
Building a profile of OLAP requirements is neither simple nor a ‘one-off’
exercise. Requirements change and your tool needs to be flexible enough to
accommodate these. In this section, we describe the major considerations
that need to be addressed when considering which tool(s) to use, and we show
how to use our evaluations to pick the best fit between your requirements
and the tools available. Bear in mind that different tools can be used to
support different clusters of needs, and that requirements change over time.
Figure 1 shows the main questions that need to be addressed to decide your
organisational requirements for OLAP. Each of these points is explored in
more detail in the following pages. In the summary below, we show how the
answers to these questions relate to the information given in the
evaluations.
[Figure 1: the main questions that need to be addressed to decide organisational requirements for OLAP, mapped to the evaluation perspectives]
General analysis
This is the least complex type of analysis. This type of analysis was once
delivered via hard copy scheduled business reports. Each report was a single
perspective. With OLAP, the data can now be interactively explored. In
general, this type of analysis uses what is provided and does not create extra
information. The models are pre-defined and the analysis is conducted using
the basic OLAP functionality of drill-down, pivoting and slice-and-dice.
Ad hoc analysis
This type of analysis demands more functionality than the general analysis
described above. It extends the previous requirements by requiring that the
user can enhance the model provided with the addition of extra dimensions
or new information derived from what is currently available.
The designer
The designer is responsible for creating the mapping layer (if one exists)
between the data sources and business view of the data, the multidimen-
sional business models and the reports derived from these. In some cases,
the creation of reports is also carried out by end users.
The designer role requires technological skills allied with an understanding
of business needs. This is the role most likely to be undertaken by the IT
department. Although the most visible output of this role is the creation of
models and reports, the most demanding part is likely to be the organisation
of connections to the data sources. This involves negotiating access, deter-
mining schedules and ensuring that the structure of the available data
supports the requirements of the models.
The designer needs good model building support from an OLAP tool, com-
bined with the option of providing advanced analytics for those models that
require them.
The administrator
The administrator, sometimes known as the manager, has the responsibility
for maintaining the system. One of the regular tasks is the scheduling and
maintenance of stored data. This is of particular importance if the data is
stored in a multidimensional database, but also relevant if data is cached by
ROLAP tools. In all cases, when data is uploaded an administrator must
ensure that the operation is completed satisfactorily, and deal with it if this
is not the case.
The administrator also needs to set up and manage user and model security.
If the OLAP system was static, this would be the main focus of the adminis-
trator’s role, but there is inevitably a high degree of volatility in the system
and the administrator must deal with this. Part of this is the need to tune
the system for performance gain. OLAP tools offer a range of support for
this, some to the extent of enabling the administrator to choose between
MOLAP and ROLAP type access. In general terms, the administrator seeks
to tune performance by maximising the speed of response of the system
while minimising the load time for any stored data.
The OLAP system requires that the data sources, the models, the metadata,
and reports based on them, are all kept synchronised. There will be pres-
sures from the business users to change the models and reports, and the
data sources are unlikely to stay static. The administrator of an OLAP
system has to have good organisational skills as well as tool support.
A final responsibility, although one which is outside the scope of most sys-
tems, is that of cleansing and organising the data for the multidimensional
business models. In many systems this is entirely delegated to the data
warehouse, but in some cases further data manipulation is carried out
within the OLAP tool.
Is there a need for integration with other OLAP and data warehousing tools?
If you wish to adopt a best-of-breed approach to your OLAP solution,
you need to ensure that all the components work together. Ideally, they
should do more than this, and have a degree of integration that enables
metadata generated by one part of the process to be used by products from
other vendors.
Within the OLAP part of the data warehousing process, the main split is
between server and client components. This has long been possible, but its
use has been limited by the lack of a standard interface between the server
and the end-user tool. All the multidimensional databases had proprietary
interfaces, and thus end-user tool vendors had to make choices about which
of these to support. The first attempt to produce a standard API was the
OLAP Council’s MD-API, but this has not been widely adopted. More re-
cently, in February 1998, Microsoft released the OLE DB for OLAP specifica-
tion. While this is still proprietary, it is rapidly becoming the de facto stand-
ard because all the major OLAP vendors, apart from Oracle, have expressed
support for it.
The emergence of this de facto standard makes it much easier for users to
plan for an integrated best-of-breed solution. In the Deployment section of
the evaluation framework, we describe the interface standards that the tool
supports.
Another aspect to integrating components is the support given for metadata
exchange between components from different vendors. While both Microsoft
and Oracle have plans to provide a metadata repository that can be used as
the basis for sharing metadata, neither of them has yet been able to deliver
this. It is therefore left to individual vendors to develop technical integration
between their products to facilitate metadata exchange.
In the evaluation framework, in Adaptability, we describe any support the
tool offers to access metadata generated in earlier stages of the data ware-
housing process by third-party tools.
The anatomy of an OLAP tool
Overview
Why is multidimensionality so important?
MOLAP, ROLAP and HOLAP
Multidimensional OLAP
OLAP architectures
Overview
In this section we describe the functionality that an OLAP system provides,
and the terms and concepts that are important in understanding this. We
explain industry terms such as ‘multidimensionality’, ‘dimensions’ and
‘business measures’.
There has been much controversy about the relative merits of MOLAP,
ROLAP and HOLAP methods of storing and accessing the data required for
analysis. These terms are explained and a comparison made between the
two main alternatives of MOLAP and ROLAP. HOLAP is a combination of
both; it promises (but has not yet delivered) the best of both worlds.
There are four main OLAP architectures, differentiated by the data storage
method used and whether the processing takes place on the client, the mid-
tier server or in the relational database. In this section, the architectures
and their advantages and limitations are described. In the evaluations we
describe if (and how) each of these configurations is supported.
Multidimensionality explained
A simple database query that lists all cars sold in February may have a use
in an operational context, but it does not give a view of how well the
business is performing. Decision makers rely on summarised data to give
them a picture of the business at a relevant level of detail. A manager does
not base next year’s budget on a list of products sold, but rather on a sum-
mary of sales of products over the year in different categories and different
markets. A more useful view is that shown in Figure 2, with sales aggre-
gated for each model.
[Figure 2: unit sales aggregated for each model]
[Figures 3 and 4: unit sales for each model, broken down by North and South regions]
A three-dimensional view is easily envisaged as a cube, as shown in Figure
5. It is more difficult to envisage a four- or five-dimensional model. This can
be thought of as a series of inter-related cubes. Six dimensions are not
uncommon in a business model, although most people have problems working with
more than nine.
Business measures
Reviewing the data in Figure 4, a sales manager may want to add sales by
revenue to the query, or be able to compare planned with actual sales side-
by-side. Quantities such as ‘sales revenue’ or ‘units sold’ are called business
‘measures’. Each measure is understood in terms of the dimensions to which
it is related in a query. Geography, time and product are the three most
common dimensions in a multidimensional business model, but the specific
dimensions used vary from business to business and from department to
department. Other common dimensions include customers, departments,
promotions and suppliers. Indeed any factor that you need to track in rela-
tion to your business measures may be considered a dimension.
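The idea that each measure is understood in terms of the dimensions to which it is related can be sketched in a few lines of Python. This is purely illustrative: the data, dimension values and function names are invented for the example and are not taken from any OLAP product.

```python
# A business measure ('units sold') stored as cells keyed by their
# dimension coordinates: (product, region, quarter).
units_sold = {
    ("Robin",    "North", "Q1"): 11,
    ("Robin",    "South", "Q1"): 9,
    ("Griffin",  "North", "Q1"): 18,
    ("Griffin",  "South", "Q1"): 12,
    ("Bluebird", "North", "Q1"): 0,
    ("Bluebird", "South", "Q1"): 20,
}

def total(measure, product=None, region=None, quarter=None):
    """Sum a measure over all cells matching the given dimension values."""
    return sum(
        value
        for (p, r, q), value in measure.items()
        if (product is None or p == product)
        and (region is None or r == region)
        and (quarter is None or q == quarter)
    )

north_total = total(units_sold, region="North")   # 29
robin_total = total(units_sold, product="Robin")  # 20
```

Any query simply fixes some dimension values and aggregates the measure over the rest, which is the essence of relating a measure to its dimensions.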
Aggregation hierarchies
Our data is now summarised by a number of cross-referenced dimensions.
However, these dimensional categories themselves need to be grouped and
summarised to provide clear information at a suitable level. Our sales
manager may want to break the information down to a finer level of detail
(for example, to see sales by city). Similarly, a higher level view may be
required (for example, annual or quarterly sales).
[Figure 5: a three-dimensional cube with time (Q1 to Q4), product (Robin, Griffin, Bluebird, Falcon) and geography dimensions]
Each level of summarised detail can be imagined (and implemented) as a
separate dimension, but this quickly becomes overly complex. It is more
common to see each dimension as having a number of different summary
levels which are relevant to the business. Together these summary levels
build up a ‘hierarchy’ through which a user can navigate. Figure 6 shows a
simple hierarchy for geography. Other hierarchies may be much more com-
plex than this. Product management, for example, requires products to be
classified under a number of different hierarchies as shown in Figure 7. This
means that, for example, sales revenues for a product can be aggregated (or
‘rolled up’) by category or by supplier, depending on the query.
Together, dimensions, measures and aggregation hierarchies form the
constituent elements of any multidimensional model.
Of course, the multidimensional table need not include summarised informa-
tion. It might, for example, include budgeted quarterly expenses for each
department. However, even here the principal benefit of multidimensionality
is the ease with which these figures are ‘rolled up’ to provide information on
budgeted expenses for the whole company over the year. In general,
multidimensionality is most relevant to data that needs to be summarised in
one form or another. A list of staff salaries is an example of data that is not
suited to a multidimensional view, because there is a one-to-one
correspondence between each employee and their salary. Multidimensional analysis,
however, becomes useful once salaries are aggregated as wage costs for each
department.
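A roll-up through an aggregation hierarchy can be sketched as follows. The cities, regions and figures are invented for illustration; the point is only that a hierarchy is a mapping from members at one level to their parents at the next.

```python
# City-level sales and a simple geography hierarchy (city -> region).
city_sales = {"Leeds": 120, "York": 80, "Bristol": 150, "Exeter": 50}
city_to_region = {"Leeds": "North", "York": "North",
                  "Bristol": "South", "Exeter": "South"}

def roll_up(values, hierarchy):
    """Sum leaf-level values into their parents at the next hierarchy level."""
    totals = {}
    for member, value in values.items():
        parent = hierarchy[member]
        totals[parent] = totals.get(parent, 0) + value
    return totals

region_sales = roll_up(city_sales, city_to_region)
```

Drilling down is the reverse operation: navigating from the region totals back to the city-level cells from which they were aggregated.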
[Figures 6 and 7: a simple roll-up hierarchy for geography, and multiple classification hierarchies for products]
Navigating a multidimensional model
Dimensions, measures and aggregation hierarchies are the constituent
elements of any multidimensional model. As a model becomes more complex
– involving many dimensions, several measures and various hierarchies – it
becomes more difficult to conceptualise and navigate through. OLAP tools
must make working with complex multidimensional models an intuitive and
efficient process.
Three basic operations – drilling, rotation and slicing & dicing – are required
to simplify the task of working with a multidimensional model. To enable
intuitive, interactive analysis, each operation must be simple to define and
be implemented without significant delay.
Drilling
Drilling is the ability to move up and down between different aggregation
levels. You drill-down, for example, from annual to quarterly sales figures or
drill-up from stores to regions. Drilling is usually invoked by double clicking
at the relevant point in a multidimensional table or chart.
Rotation
Rotation (also known as pivoting) allows dimensions to be viewed from any
perspective. For example, Figure 8 shows how we can rotate a three-dimen-
sional cube to show different aspects of our data. Rotation is much easier to
use in practice than it is to envisage in the mind. The user of an OLAP tool
does not need to think about cubes or rotation; they simply indicate that they
wish to see sales by quarter for the northern region. This is usually achieved
by dragging and dropping a dimension to a new position: the OLAP tool
rotates the perspective automatically.
Slicing and dicing
A user may only want to see sales figures for January, or for regions where
sales were below $100,000. The process of selecting the required data is
referred to as ‘slicing and dicing’, in reference to the necessary operations on
a multidimensional cube to pick out the required information. As well as
simple selections, OLAP tools should allow users to select specific items from
a dimension, select items by ranking (for example, the top five selling prod-
ucts) and combine selection criteria to build complex queries.
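Slicing and ranked selection can be sketched over the same cell-based representation. The data and function names below are invented for the example and do not reflect any particular tool's interface.

```python
# Cells keyed by (product, region); values are unit sales.
sales = {("A", "North"): 120, ("A", "South"): 90,
         ("B", "North"): 40,  ("B", "South"): 200,
         ("C", "North"): 310, ("C", "South"): 15}

def slice_by_region(cells, region):
    """A 'slice': fix one dimension value, leaving a lower-dimensional view."""
    return {p: v for (p, r), v in cells.items() if r == region}

def top_n_products(cells, n):
    """Selection by ranking: the n best-selling products across all regions."""
    totals = {}
    for (p, _r), v in cells.items():
        totals[p] = totals.get(p, 0) + v
    return sorted(totals, key=totals.get, reverse=True)[:n]

north_view = slice_by_region(sales, "North")
best_sellers = top_n_products(sales, 2)
```

A ‘dice’ is simply a slice applied along more than one dimension at once; combining such selections with ranking criteria yields the complex queries described above.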
[Figure 8: rotating a cube. Sales for Q1 by region; rotate to view sales by quarter for the North region; rotate to see sales for product A by region and quarter]
MOLAP, ROLAP and HOLAP
The core data for multidimensional analysis has to be stored in a structure
that provides high performance querying, scalability and multi-user access.
Relational databases are optimised for frequent, simple queries. They are
less suited to supporting complex, multidimensional queries. Many queries
cannot be handled by a single SQL query (for example, the selection of the
top five products by market share). Multidimensional queries require more
table joins and full table scans, both of which drastically reduce
performance.
Three data storage strategies can be employed that overcome the limitations
of the relational model for multidimensional analysis:
• the use of specialised multidimensional databases (MDDBs), which
provide optimised storage and retrieval of data for OLAP queries
• the use of a data warehouse, built using relational technology but
optimised for decision support rather than transactional operations
• a combination of these approaches.
OLAP tools that provide multidimensional storage are often referred to as
MOLAP tools, while those that access data stored in relational databases are
referred to as relational OLAP or ROLAP tools. Tools that combine the two
approaches and work with data stored in a multidimensional structure as
well as data retrieved from a RDBMS are known as Hybrid OLAP tools
(HOLAP).
Multidimensional OLAP
A multidimensional database (MDDB) stores data in a series of array struc-
tures, indexed to provide optimal access time to any element in the array.
These structures can be envisaged as multidimensional cubes similar to that
shown in Figure 5. MOLAP vendors originally fell into two groups – those
using a hyper-cube architecture and those using multi-cubes. Increasingly,
the distinction is being blurred as hyper-cubes can be partitioned (effectively
becoming multi-cubes) and multi-cubes can be joined to form a virtual
hyper-cube.
Hyper-cubes
A hyper-cube architecture provides a single ‘cube’ in which each measure is cross-
referenced against all dimensions. The advocates of this approach emphasise its
simplicity and the consistency of performance, whatever dimensions or parts of a
dimension are selected in a query. However, it uses more disk space and requires
good sparsity handling if the cube’s size is not to become unmanageable.
Multi-cubes
A multi-cube architecture allows measures to be cross-referenced against selected
dimensions only. One cube may include sales revenues dimensioned by time, geogra-
phy and product. Another cube may dimension costs by department and time. This
approach uses less disk space and provides better performance for each cube;
therefore, it tends to be more scalable. However, performance may not be consistent
for queries that require access to more than one cube, and more complex processing
is required to ensure consistent results across multiple cubes.
Pre-consolidation
One of the advantages of an MDDB is that it can pre-consolidate the an-
swers to many queries. Whereas a relational database will normally need to
search all relevant records and aggregate data in order to answer a question
such as ‘how many packets of soap powder did we sell last quarter?’, an
MDDB can calculate such totals quickly as it only has to add up the cells in
the relevant rows and columns of a multidimensional array. Once calculated,
totals can be stored in the array structure. Most MDDBs have strong array
handling functions, which further speed up calculations.
MDDBs can pre-consolidate all data; so, for example, totals for each level in
each hierarchy are calculated when the database is loaded. This approach
gives very fast response times to most queries, but it requires considerable
disk space and makes the load time longer. An alternative is to consolidate
only commonly used totals and to calculate others on-the-fly. An important
optimisation decision for the MDDB administrator is which data should be
pre-consolidated and which calculated at runtime by the OLAP engine.
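The pre-consolidation trade-off can be sketched in miniature. The figures and names below are invented; the point is the split between totals stored at load time and totals calculated at runtime by the engine.

```python
# Detail cells keyed by (product, quarter).
cells = {("Robin", "Q1"): 20, ("Robin", "Q2"): 25,
         ("Griffin", "Q1"): 30, ("Griffin", "Q2"): 21}

# Load time: pre-consolidate the per-product totals (a commonly used query).
preconsolidated = {}
for (product, _quarter), value in cells.items():
    preconsolidated[product] = preconsolidated.get(product, 0) + value

def product_total(product):
    """Answered instantly from the stored totals."""
    return preconsolidated[product]

def quarter_total(quarter):
    """A rarer query: calculated on the fly by scanning the detail cells."""
    return sum(v for (_p, q), v in cells.items() if q == quarter)
```

Pre-consolidating everything makes every query look like `product_total` (fast, but costly in disk space and load time); pre-consolidating nothing makes every query look like `quarter_total`.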
Sparsity
MDDBs store data collected from a series of detailed transaction records.
Sparsity is a by-product of cross-referencing those records to provide a
multidimensional model.
This is best explained through an example. A large retailer sells thousands
of products in its stores across the country. It wishes to analyse those sales
on a daily basis by store. Not all products, however, will be sold in all stores
every day. A cube created by dimensioning sales numbers by store by prod-
uct by day will, therefore, have many empty cells where no products were
sold. If only 20% of the cells in a cube are populated, then the data is said to
be 80% sparse. It is not uncommon in some application areas for data to be
more than 90% sparse.
An MDDB must have a way of dealing with sparse data to prevent storage
space being swamped by null or zero values. Each vendor has its own mecha-
nism, but in general they compress the database so that null values do not
need to be stored. Although there are some performance costs when decom-
pressing the data, in recompense access times for very sparse data are
improved because of more efficient indexing.
An MDDB must have optimisation strategies for dealing with both sparse
and dense datasets and, ideally, be able to combine these in the most effec-
tive manner. Some applications tend to generate very sparse data. Product
management applications, for example, often require analysis of many
attributes of a product (such as size, price, colour or package size). A multidi-
mensional cube that cross-references such dimensions against other dimen-
sions (such as geography and time) will tend to have a high degree of spar-
sity. Other applications, however, will not produce sparse datasets (financial
applications for budgeting and planning, for example, tend to create dense
datasets).
Sparsity also varies with the level of aggregation (when analysing
product sales by region rather than by store, there will be fewer ‘gaps’).
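The retailer example can be made concrete in a few lines. Storing only the populated cells is the simplest form of the compression the vendors apply; the dimension members below are invented for illustration.

```python
# The logical cube: every product, in every store, on every day.
products = ["Robin", "Griffin", "Bluebird"]
stores = ["S1", "S2", "S3", "S4"]
days = ["Mon", "Tue", "Wed", "Thu", "Fri"]

# Only cells with actual sales are stored (a compressed representation);
# the empty cells exist logically but occupy no space.
stored = {("Robin", "S1", "Mon"): 3,
          ("Griffin", "S2", "Wed"): 1,
          ("Bluebird", "S4", "Fri"): 7}

logical_cells = len(products) * len(stores) * len(days)   # 60 cells
sparsity = 1 - len(stored) / logical_cells                # 0.95, i.e. 95% sparse
```

With 3 of 60 logical cells populated, the data is 95% sparse; real MDDBs add indexing on top of this so that the compressed form can still be addressed quickly.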
Relational OLAP
Although relational databases are not optimised for multidimensional
analysis, they do have advantages over MDDBs in other areas. In particular,
they scale to larger datasets and include support for replication, rollback and
recovery. Moreover, most organisations already have in-house skills and
significant experience with their strategic RDBMS.
With large data warehouses (in the hundreds of gigabytes or terabyte
range), the advantages of an RDBMS over an MDDB become clear.
Some OLAP tools can provide multidimensional analysis of data stored in a
relational database. These relational OLAP (ROLAP) tools provide a
business model and an OLAP engine that sits above the data warehouse. A
metadata layer is used to map the warehouse structure onto a multidimen-
sional model. The tool then generates the SQL necessary to retrieve data to
satisfy the user queries.
ROLAP tools work with the RDBMS in significantly different ways:
• some tools use the RDBMS to do all the data processing. To do this they
generate multi-pass SQL statements and create temporary tables in the
DBMS where necessary to process complex queries (this is the approach
adopted by MicroStrategy)
• some tools provide calculation functionality outside the RDBMS. SQL is
still generated to retrieve data, but calculations (including some joins or
aggregations) will be carried out by the ROLAP tool (this is the approach
adopted by Information Advantage).
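The SQL-generation step at the heart of a ROLAP engine can be sketched as follows. This is purely illustrative: the table and column names belong to an invented star schema, not to any vendor's product, and real engines generate far more sophisticated (often multi-pass) SQL.

```python
def generate_sql(measure, dimensions, table="sales_fact"):
    """Translate a multidimensional request into a grouped aggregate query.

    'measure' is the fact column to aggregate; 'dimensions' are the
    columns the user has placed on the rows/columns of the report.
    """
    dims = ", ".join(dimensions)
    return (f"SELECT {dims}, SUM({measure}) AS total "
            f"FROM {table} GROUP BY {dims}")

sql = generate_sql("revenue", ["region", "quarter"])
```

The metadata layer's job is to supply the mapping that makes this translation possible: which fact table holds the measure, and which warehouse columns correspond to each business dimension.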
Relational OLAP tends to provide strong support for applications that access
very large datasets, such as product or customer management applications
for large retailers. Such applications require analysis of many items (thou-
sands of products and possibly millions of customers) by a large number of
dimensions and therefore push the capacity limits of MDDBs.
Although ROLAP tools do not need to be concerned with sparsity directly,
aggregation values tend to be stored in the data warehouse for better per-
formance, and so sparsity remains an issue albeit one dealt with by the data
warehouse designer rather than the OLAP vendor.
Hybrid OLAP
Hybrid OLAP is most easily defined by saying that it is not pure MOLAP or
pure ROLAP. The reason for defining it in this way is that there are several
variants of HOLAP. In summary, the main ones are:
• an MDDB that can retrieve and analyse detail information
• a multidimensional store, optionally with some pre-consolidation.
Increasingly, the term is associated with the first definition and the wish to
combine the advantages of MOLAP and ROLAP.
An MDDB that can retrieve and analyse detail information
This definition of HOLAP is likely to become the generally accepted one, as
it is the term used in Microsoft SQL Server 7.0 OLAP Services. In the
Microsoft tool, HOLAP is defined as storing the summary data in the MDDB
and the detail data in the RDBMS. The user works with one model, which
transparently accesses two types of storage.
The significant aspect of this data storage option is its ease of use rather
than its novelty; other vendors have already supported analysis against data
retrieved from the RDBMS using ‘reach through’ SQL. When the data is
needed it is dynamically retrieved and processed by the MDDB engine.
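The routing decision at the heart of this HOLAP arrangement can be sketched as below. The summary store and detail rows are invented stand-ins for the MDDB and the RDBMS respectively; a real implementation would hide this routing behind a single query interface.

```python
# Sketch: the HOLAP 'reach through' idea -- one model, two stores.
# Summary values live in a small in-memory stand-in for the MDDB; anything
# below that grain is fetched from a detail store standing in for the RDBMS.
# All data and names are invented for illustration.

summary_cube = {("north", "1999"): 215}          # pre-consolidated totals
detail_rows = [                                   # stand-in for relational detail
    {"region": "north", "month": "1999-01", "units": 120},
    {"region": "north", "month": "1999-02", "units": 95},
]

def query(region, period):
    """Answer from the summary store if possible, else reach through."""
    if (region, period) in summary_cube:
        return summary_cube[(region, period)], "MDDB"
    total = sum(r["units"] for r in detail_rows
                if r["region"] == region and r["month"] == period)
    return total, "RDBMS reach-through"

print(query("north", "1999"))      # served from the summary store
print(query("north", "1999-01"))   # served by reaching through to detail
```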
A multidimensional store
Some client-based OLAP tools extract a selection of data from an RDBMS
and then construct a multidimensional cube (sometimes called a micro-cube)
on the client.
The functional difference between a multidimensional store and an MDDB
is that the latter provides a database manager layer that shields the data-
base users from the technical implementation. MDDBs thus provide a data
manipulation language (DML) that is used to access the data. Each of the
MDDBs offers a proprietary query language. There are also architectural
differences between MDDBs and multidimensional stores in terms of how
the data is stored.
Some vendors using this approach give the option of including pre-consoli-
dated values within the store (for example, Cognos), while others just store
the data and perform consolidations as required (for example, Business Objects).
Some vendors (such as Seagate) offer a wide variety of storage options for
these multidimensional stores.
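The construction of a client-side micro-cube, and the pre-consolidation option just described, can be sketched as follows. The extract stands in for the result of an SQL query against the RDBMS; the names and figures are illustrative.

```python
# Sketch: building a client-side micro-cube from a relational extract.
# Whether roll-ups are pre-computed (the Cognos-style option) or calculated
# on demand (the Business Objects-style option) is a one-line choice here.

extract = [                         # stand-in for rows returned by SQL
    ("anorak", "north", 120),
    ("anorak", "south", 40),
    ("tent", "south", 7),
]

def build_microcube(rows, preconsolidate=False):
    cube = {}
    for product, region, units in rows:
        cube[(product, region)] = cube.get((product, region), 0) + units
        if preconsolidate:  # store roll-ups alongside the base cells
            cube[(product, "ALL")] = cube.get((product, "ALL"), 0) + units
    return cube

cube = build_microcube(extract, preconsolidate=True)
print(cube[("anorak", "ALL")])  # -> 160
```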
Relational or multidimensional?
There has been a great deal of debate about the respective merits of the two
architectures. As outlined in Figure 9, each technology has its advantages
and disadvantages. While a few years ago there was an implicit assumption
that one approach was superior and would ‘win’, there is now an acknowl-
edgement of the strengths of both approaches. The focus of the debate is now
shifting towards defining the criteria to use in choosing one or the other, and
evaluating the extent to which HOLAP does, in practice, combine the best of
both worlds.
The most important issue is to understand your business requirements and
the implications for the type and size of the data you need to access now and
in the foreseeable future. It is important to consider how this specific goal
fits into the wider strategy for decision support in your organisation. Is this
a departmental application or the first step towards a larger enterprise-wide
information delivery system? Is the aim to provide information to a wider
user base or to provide strong analytical functionality to a much smaller set?
These are not contradictory aims, but it is important to know which has
priority. Armed with this information it is possible to look for an OLAP tool
that suits your needs. No single approach is sufficient for all situations and
needs will increasingly be met by a combination of tools from a variety of
vendors.
Figure 9 reflects the generic advantages and disadvantages of MOLAP and
ROLAP technology. Individual OLAP tools, of course, vary in the extent to
which they match the relevant profile.
Figure 9 MOLAP and ROLAP compared
MOLAP advantages:
• easy to set up and manage
• support for sales and marketing and budgeting applications
MOLAP disadvantages:
• time for data loading acts as a limit on scalability
• a proprietary storage format (although OLE DB for OLAP is emerging as the de facto standard for data access)
ROLAP advantages:
• particularly suited to product and customer management applications
ROLAP disadvantages:
• not suited for budgeting and financial planning applications
OLAP architectures
The components of an OLAP system can be implemented in various ways, so
there are several possible architectural configurations. The main
differentiators are:
• whether the data is stored in a multidimensional store that is part of the
OLAP tool, or in a relational database (or similar) that is outside the
scope of the tool
• whether the processing takes place on the client, on a mid-tier server or
in the relational database.
[Figure: the four OLAP architectures (full mid-tier, light mid-tier, desktop and mobile), showing where the metadata, client, data and MDDB components reside.]
Desktop architecture
The principal difference between this and the light mid-tier architecture is
that the OLAP engine is on the desktop. The main consequence of this ‘fat
client’ architecture is that it cannot support web access. Vendors that origi-
nally had only desktop architectures have had to add a mid-tier server so
that the processing of queries can be moved off the desktop to support a thin
client.
As with the light mid-tier architecture, access time is reduced if the data for
queries is stored either locally or on a mid-tier server. Most tools support
this.
This architectural configuration requires good management support to
ensure that models on the desktop are synchronised to reflect one version of
the truth. The main limitation is the overhead of managing a large number
of fat clients, rather than centralising control on a mid-tier server.
The appeal of this architecture is that it is usually quick to deploy, with
minimal dependence on IT staff because there is no mid-tier server to
configure. This speed of deployment is, however, predicated on the data for
the models being in a ready-to-use state.
Mobile architecture
The mobile architecture is similar to the desktop one, except that it must be
possible for the end user to download a useful subset of the data, sever the
link to the main data source, and still have the same functionality as if the
link had been maintained. This is less easy to implement using tools with
ROLAP architecture because, by definition, the data is stored in a relational
database. The crucial questions for these tools are whether they support a
caching mechanism that is independent of the mid-tier server, and whether
there is any reduced functionality when the cached data is being accessed
without the use of the mid-tier server.
A requirement of this architecture is the need to ensure that downloaded
models are synchronised with the model from which they were derived when
the mobile user next logs in. The user must be able to reconcile any changes
they have made to the structure of the model – such as the addition of new
calculated measures – to the latest version of the data. If write-back is
supported there is a further level of complexity to ensure synchronisation.
The advantages of this architecture are, as its name suggests, that it can be
used by a mobile workforce and thus extends the user base. The limitations
are the dangers of using out-of-date data and the need for sophisticated
mechanisms to ensure co-ordination with ‘the mothership’.
Guide to the evaluations
Future enhancements
Company background
Customer support
Distribution
Product evaluation
Deployment
End-user functionality
End users, particularly those who do not use the system on a regular basis, need
to be able to easily find and use a previously created multidimensional business
model.
A high score is important if the tool is to be used by occasional users and those
with minimal IT skills. While power users will always get to grips with the tool,
casual users require more support. The score in this dimension will be of less
importance if the intended users of the tools are an elite group and there is little
concern with distributing resulting information to a wider audience.
Designers of the multidimensional business model need tools that offer enough
flexibility for the model to be built to fit the business needs.
A high score here is important if you want to ‘fine tune’ a complex business model
using the OLAP tool. A high score indicates that the tool offers more support for
tailoring dimensions and measures. A low score will not be of concern if the
intended data model is simple and largely a reflection of the data structures in
the data warehouse or data sources.
Defining dimensions
The lowest level of a dimension hierarchy will almost always be mapped
onto an actual data source. This is a field or column in a relational data
source, or its equivalent in non-relational sources (for example, comma
delimited text files). We expect levels above to offer more options, so that
they can be directly mapped onto data sources or be defined by the designer.
The tool should offer support for alternative drill-down patterns in the
dimension hierarchy.
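These expectations can be illustrated with a minimal sketch of a dimension offering alternative drill-down paths. The level names and source column are invented; the point is that the leaf level maps onto a data source while the levels above are designer-defined.

```python
# Sketch: one dimension with two alternative drill-down paths. The lowest
# level maps onto a source column; the levels above are designer-defined.
# All names here are illustrative.

time_dimension = {
    "leaf_level": ("sales_fact", "sale_date"),   # mapped onto a source column
    "hierarchies": {
        "calendar": ["year", "quarter", "month", "day"],
        "fiscal": ["fiscal_year", "fiscal_period", "day"],
    },
}

def drill_path(hierarchy):
    """Render one drill-down pattern through the dimension."""
    return " > ".join(time_dimension["hierarchies"][hierarchy])

print(drill_path("calendar"))  # -> year > quarter > month > day
print(drill_path("fiscal"))    # -> fiscal_year > fiscal_period > day
```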
Defining measures
The major issue with regard to measures is the support the tool gives for
specifying calculated measures. Most tools will allow data to be defined
using arithmetic, relational and logical operators. Minimal support is pro-
vided if the complexity of derived data is limited to what can be specified in
SQL. It is preferable if the tool offers a range of functions that enable more
complex calculations to be defined.
A high score is essential if your users intend to use the OLAP tool for complex
analytical work, business modelling or forecasting, probably as a business
analyst or a power user. A low score will be acceptable if your analysis consists
of manipulating and presenting historical data, rather than applying complex
formulae or algorithms to the data.
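The distinction between simple and complex calculated measures can be sketched as below. The base measures and figures are invented; the moving average stands for the kind of windowed calculation that typically exceeds what a single SQL expression can specify.

```python
# Sketch: calculated measures defined as functions over base measures.
# The margin is plain arithmetic that SQL could express; the three-month
# moving average is the sort of function that usually cannot be specified
# in a single SQL expression. Names and data are illustrative only.

base = {"revenue": [100, 110, 130, 90], "cost": [60, 70, 80, 50]}

calculated = {
    # simple arithmetic over base measures
    "margin": lambda m, i: (m["revenue"][i] - m["cost"][i]) / m["revenue"][i],
    # a windowed function over a trailing three-period range
    "revenue_3mma": lambda m, i: (sum(m["revenue"][max(0, i - 2): i + 1])
                                  / len(m["revenue"][max(0, i - 2): i + 1])),
}

print(calculated["margin"](base, 0))        # -> 0.4
print(calculated["revenue_3mma"](base, 3))  # mean of 110, 130, 90 -> 110.0
```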
Some end users require specialised support, beyond the simple manipulation
and presentation of multidimensional data. Spreadsheets (such as Microsoft
Excel and Lotus 1-2-3) already provide much of this analytical functionality,
and we expect the tools to provide tight integration with spreadsheet inter-
faces. Integration with other tools offering specialised analysis is highly
beneficial to specialised users.
Data mining capabilities, either inherently or through integration with other
tools, give additional support to the user.
In forecasting and budgetary applications, a critical requirement is the need
to be able to write back data to the business model so that new values,
dependent on this, can be calculated. This is equivalent to ‘what-if ’ analysis
using a spreadsheet, but with a more complex set of interacting factors.
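The write-back requirement can be sketched in miniature. The model values and the commission rate below are invented; the point is that writing a new assumption back to the model causes every dependent value to be recalculated, just as in spreadsheet ‘what-if’ analysis.

```python
# Sketch: write-back with dependent recalculation, spreadsheet-style
# 'what-if'. The dependency chain is deliberately tiny and illustrative.

model = {"unit_price": 10.0, "forecast_units": 500}

def derived(m):
    """Values dependent on the writable cells, recomputed after write-back."""
    revenue = m["unit_price"] * m["forecast_units"]
    return {"revenue": revenue, "commission": revenue * 0.05}

print(derived(model)["revenue"])      # -> 5000.0

model["forecast_units"] = 600         # the user writes back a new assumption
print(derived(model)["commission"])   # -> 300.0
```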
Web support
To fully exploit the Web, tools should support web publishing and the exploration
and creation of models via a web browser.
A high score is crucial if the intention is to empower a large number of users with
OLAP at minimal cost, or if users require ‘access from any desktop’, but will be
less important if the intention is to constrain the use to a small group of power
users equipped with standard PCs.
Tools should offer support for the management of models, data and users that is
easy to use and reduces the workload of the administrator.
Management of models
The main issues in managing models are security and query monitoring.
Information from monitoring the use of models enables the administrator to
respond to changing demands, and supports the process of tuning both the
models and the data sources that feed them.
Management of data
The management of persistent data is more complicated. In MOLAP, by
definition, the data used in the multidimensional business model is stored in
a multidimensional database. In ROLAP, while the main data source is a
relational database, all the tools evaluated also make use of a persistent
data cache outside the RDBMS. This is required both to reduce query times
and to minimise the number of calls to the source data. Regardless of the
storage mechanism, all data organised for multidimensional analysis will
explode in size if a large proportion of the potential consolidations are pre-
calculated, and will therefore require support for size management. In the
case of ROLAP tools, this is the responsibility of the data warehouse and
outside the scope of the tool. With a MOLAP-type store, it is managed within
the tool.
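The scale of the data explosion can be illustrated with simple arithmetic. Adding an ‘ALL’ roll-up member to each dimension, as a very crude model of pre-calculation, grows the potential cell count from the product of the leaf counts to the product of the leaf counts plus one; the dimension sizes below are invented.

```python
# Sketch: why pre-calculation 'explodes' a cube's size. With one 'ALL'
# roll-up member added per dimension, the potential cell count grows from
# the product of the leaf counts to the product of (leaves + 1) -- and this
# is before sparsity is even considered.

def cells(leaf_counts, preaggregated=False):
    total = 1
    for n in leaf_counts:
        total *= (n + 1) if preaggregated else n
    return total

dims = [1000, 50, 36]                    # products x stores x months
print(cells(dims))                       # base cells: 1,800,000
print(cells(dims, preaggregated=True))   # with roll-ups: 1,888,887
```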
The detailed management tasks associated with MOLAP, ROLAP and
HOLAP are different, but the main issue in all cases is the quality of support
the tool gives for ease of management.
Some of the issues common to all types of persistent storage are scheduling,
distributing the data and informing the user of the currency of the data.
Management of users
The tool should allow the administrator to define individual and group
security profiles. Query governance should be provided to ensure that users
are at least warned about, if not prevented from, making resource-expensive
requests.
Adaptability
A high score is important if the data sources or user requirements are likely to be
volatile and the implementation is medium to large. A low score will be accept-
able in the unlikely situation that both user requirements and data sources are
considered to be comprehensive and stable.
The adaptability of the system to cope with changes at both ends of the
process (that is, the data sources used and the business requirements) is
important for all OLAP systems.
Another aspect of adaptability is the extent to which the tool can mix-and-
match data storage options; for instance, can data for a business model be
stored in an MDDB and/or a relational database, and can these options be
modified once the business model has been created? This flexibility gives the
designer choices about how to optimise the system.
An OLAP environment combines a number of potentially volatile elements.
These include the requirements of the end user. Although it is essential that
a requirement specification be drawn up before the models are built, new
needs will emerge as users make use of the system. If the requirements
change, it is a sign that the system is being used, not that it was poorly
defined.
Further changes will be required as different types of users are added to the
system. It is therefore essential that the tool provides mechanisms to make
it as straightforward as possible to adapt to changing requirements.
One of the important aids to ensuring ease of adaptability as requirements
change is comprehensive and well managed metadata. Most OLAP tools
interpret metadata as meaning source table schemata and the business
names given to columns. We regard this as a minimal requirement. The next
step up, in terms of the quality of metadata, is to enable the designer to add
descriptive text to some of the objects (for example, models, dimensions and
measures). Ideally, the metadata should be much richer and with structures
to capture author details, versions and changes, dates, data derivation and
end-user annotations, and then give the user facilities to search the
metadata. In general, few tools offer this quality of support.
The ideal situation is that rich contextual metadata is available in a control-
led repository, so that access and visibility can be managed from within the
toolkit. The metadata should be searchable to allow users and designers to
easily find relevant models and model components.
You need to be wary of how this section is interpreted, because one benefit of
limited metadata (for instance, if it merely captures the schematic details) is
that it makes it much easier to keep it synchronised with the models. Thus, a
high score in this perspective coupled with a low score for the business
model perspective could indicate that the model is easy to synchronise
because it is basic.
Performance tunability
The administrator needs tool support to enhance performance by tuning the data
extraction and the data manipulation processes.
A high score is essential whatever the scope or nature of the OLAP operation. A
low score is only acceptable in the short term, where users have previously either
not been getting the information at all or been waiting for requests to be
processed by the IT department. In the longer term, a low score is unacceptable
for any tool.
Customisation
We consider the support the tool provides for developing applications that
include multidimensional data, in tabular and chart formats, that the user
can interactively explore.
At a glance
This one-page overview summarises the product and its principal features.
It includes:
• the name and principal location of the product vendor
• the name/s and version number/s of the product/s evaluated and their
release date/s
• three key facts about the product, generally the type of OLAP tool (for
example, MDDB, relational OLAP or desktop), the platforms it runs on
and an interesting fact about the product or company that might affect a
purchasing decision
• three strengths of the product
• three points to watch, or aspects that may be weaknesses of the product
in some circumstances
• the ratings chart – a tabular summary of the product’s scores on the
evaluation perspectives.
Ovum’s verdict
What we think
A summary of Ovum’s opinion of the product (good, bad and neutral), with
the reasons.
When to use
A description of the circumstances in which you should shortlist the product
and those in which it is less suitable.
Product overview
Components
The main components of the product and their version numbers are listed
and described.
Figure 1 Four OLAP architectures
[Figure content: the storage and processing locations (metadata, client, data and MDDB) for each of the four architectures.]
Architectural options
We describe whether the toolset can be configured in each of four
architectures:
• full mid-tier architecture
• light mid-tier architecture
• desktop architecture
• mobile architecture.
Figure 1 summarises the main storage and processing features of the
architectures, which are described in more detail in The anatomy of an
OLAP tool in the OLAP fundamentals section.
Company background
History and commercial
The company’s history, other product lines, revenue and profitability.
Customer support
The support, training and consultancy available to purchasers of the
product.
Distribution
The name, address and telephone numbers for the company’s main contact
in the US, Europe and Asia-Pacific. Includes the web address of the vendor.
Product evaluation
Each of the vendor’s OLAP toolsets is evaluated along eight perspectives:
• end-user functionality
• building the business model
• advanced analytical power
• web support
• management
• adaptability
• performance tunability
• customisation.
These are described in detail in the next section.
Deployment
Platforms
The platforms that the server and client component/s run on.
Data access
The data sources that the tool can access; for example, relational databases,
comma delimited files, personal productivity tools (such as Excel), third-
party MDDBs and data from ERP applications.
Standards
Whereas the relational database world has standardised on SQL, there are
various competing standards in the MDDB world. These include versions 1
and 2 of the OLAP Council specification, Microsoft’s OLE DB for OLAP
(supported as a consumer, a provider or both) and a number of proprietary
but published interfaces for accessing MDDBs. We state which standard/s
the product supports.
The importance of this information is that it determines compatibility
between products from different vendors.
Published benchmarks
We describe any published benchmarks for the product, but advise caution
in attributing too much importance to these measures because:
• the methods used are open to considerable interpretation and debate
• the leadership pattern is unstable in an area such as OLAP, which is new
to performance measurement.
Price structure
The pricing structure, as supplied by the vendor, is described. However, we
advise prospective purchasers to contact vendors directly for details concern-
ing site licences and volume discount deals.
The evaluations in detail
End-user functionality
Summary
A brief discussion of the product score in this dimension.
Basic design
Design interface
We describe the design interface and give credit for an easy-to-use
graphical interface.
Visualising the data source
We give credit if the tool enables the developer to see a sample of source
data as well as the schema, because this informs decisions about mapping
fields onto dimensions and measures.
Universally available mapping layer
The dimensions and measures in the multidimensional business model have
to be mapped onto fields in data sources. This may be done directly or there
may be a ‘mapping layer’ between the logical business model and the
physical data. The mapping layer acts as a catalogue of the data sources,
with the replacement of any cryptic column names with meaningful business
names, sometimes with the addition of metadata about data transformations.
The developer of the multidimensional business model then works from the
data definitions in the mapping layer, rather than directly with the data
sources. The advantages of this are that it is easier to build the cubes as the
meaning of the data is clearer, and it can also be used to insulate the model
user from changes to the source data. If the name or location of a source
data field changes, the administrator only has to change the reference in the
mapping layer, and all models using this field will continue to work without
further modification.
Credit is given for support for this facility.
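The insulating effect of a mapping layer can be sketched as below. The business name, table and column are all invented; the point is that when the physical column is renamed, only the single mapping entry changes and every model built on the business name continues to work.

```python
# Sketch: a mapping layer insulating business models from source changes.
# Models reference the business name 'Customer Region'; when the physical
# column is renamed, only the one mapping entry changes. Names are invented.

mapping = {"Customer Region": ("cust_dim", "rgn_cd")}

def resolve(business_name):
    """Translate a business name into its current physical location."""
    table, column = mapping[business_name]
    return "%s.%s" % (table, column)

print(resolve("Customer Region"))        # -> cust_dim.rgn_cd

# The DBA renames the cryptic physical column; models keep working unchanged.
mapping["Customer Region"] = ("cust_dim", "region_code")
print(resolve("Customer Region"))        # -> cust_dim.region_code
```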
Prompts for metadata
While metadata about structural details can be captured automatically,
contextual details such as a description, author, contact details and rationale
have to be entered manually. We give credit if the tool automatically prompts
the developer for such details.
Multiple designers
We describe the support the tool gives to prevent multiple users overwriting
each other’s work. Credit is given for appropriate mechanisms.
Support for versioning
Versioning support is extremely useful whether one or several designers are
working on a model. It is described and credit given for any versioning
support directly provided by the tool.
Data mining
OLAP tools adopt one of three positions with regard to data mining:
• they ignore it
• they provide some built-in data mining functionality
• they integrate with another tool.
The built-in data mining functionality may be based on decision tree algo-
rithms, inductive reasoning, pattern matching, cluster analysis or neural
network technology. If the support is provided by integration with another
tool, credit is given regardless of whether the tool originates from the OLAP
vendor or a third party; the main issue is the need for ease of integration. No
credit is given for joint marketing ventures with no technological substance
behind them.
We describe the nature of the support and give credit for both provision
within the tool and close integration.
Data mining tools and methods are evaluated in greater detail in Ovum
Evaluates: Data Mining.
Web support
Summary
A brief discussion of the product score in this dimension.
Management
Summary
A brief discussion of the product score in this dimension.
Management of models
Separate management interface
There are separate roles with different skill sets necessary to support OLAP.
For ease of administration it is preferable that the management of the
system is done via an interface designed for this purpose. Here we give
credit for a graphical interface designed for the management function.
Security of models
OLAP information requires the same level of security as database informa-
tion. While it is self-evident that data stored in MDDBs requires access
control, it is also the case with ROLAP tools as all of them store data persist-
ently to enhance performance. We do not give credit if the tool relies on the
security of the databases supplying the data, but only if there is a separate
and convincing security mechanism within the OLAP tool.
Query monitoring
Query monitoring is required both to tune the system for performance and
to tailor its content. The most popular queries may need to be optimised in
various ways, such as the provision of pre-calculated aggregate tables or
caching the data locally. Query monitoring also assists in ensuring user
satisfaction by helping the developer tailor the content of the business
models according to usage.
Query monitoring should provide the administrator with details about the
use of reports (for example, which reports are run when and by whom) as
well as processing details (for example, average, mean and mode times for
processing, number of records processed and so on).
Credit is given for support for these functions, but the score is reduced if
there is poor integration with the rest of the toolkit.
Management of data
How persistent data is stored (not scored)
Here we describe the storage mechanism used by the tool; this is not scored.
Scheduling of loads/updates
As all the tools we have evaluated have some form of persistent data store,
they require scheduling support to control the update process. We give credit
for an easy-to-use interface offering a wide range of options. We do not give
credit if the tool relies on the scheduling facilities of the operating system or
third-party tools, unless these are very well integrated with the rest of the
tool’s management facility.
Rather than having to individually define schedules for each business model
it should be possible to name a specification and then re-use it as required.
Extra credit is given if there is support for this.
Event-driven scheduling
Being able to define scheduling as contingent on events, such as the comple-
tion of a data load process in the data warehouse, gives extra flexibility to
the tool. Credit is given for an easy to use means of doing this.
Failed loads/updates
If an update fails the administrator needs to know this, needs to know why it
has happened and should ideally be able to specify that the failed update is
automatically resubmitted a set number of times. Comprehensive error
reporting is extremely important to assist in resolving the problem.
Credit is given for the breadth and depth of scheduling support.
Distribution of stored data
The administrator should be able to specify whether the stored data is held
on a local client, a central server or anywhere on the network. We give credit
if the administrator has these options.
Sparsity (only for persistent models)
We expect tools that include consolidated aggregates to have a method for
handling sparsity and thus minimising the data explosion that results from
the storage of aggregates of sparse data. Here we describe the way in which
the tool handles sparsity and deduct credit if the tool does not combine ease
of use with effectiveness.
(ROLAP tools, for which the management of this within the OLAP tool is not
an issue, get full credit.)
Methods for managing size
This is less of an issue in ROLAP, where the decisions about aggregates and
indexing are the responsibility of the data warehouse or data source admin-
istrator and outside of the scope of the OLAP tool. In MOLAP, the issue of
how to deal with the explosion in size resulting from pre-computed data has
to be dealt with by the OLAP tool.
The final size of a multidimensional structure is primarily a function of the
number of stored pre-calculated aggregates, which is made more acute if the
data is sparse.
Credit is given if you can select which aggregates are pre-calculated, and
additional points are awarded if there is wizard support for this.
In-memory caching options
Credit is given for support analogous to that provided in mature RDBMS
products, which allow the DBA to configure the size and use of the cache to
optimise performance; for example, for particular users or tables. We give
credit if there is some support to enable the administrator to adjust the size
of the cache, and extra credit if there is wizard assistance to reduce the skill
set necessary to make these adjustments.
Informing the user when stored data was last uploaded
The user should be able to find out the currency of the data; for instance,
‘when was customer credit rating last updated?’. This may require the
system to reach back and retrieve upstream metadata from the data ware-
house. Here we describe any facilities the tool offers to support this and give
credit if it is easy for the user to ascertain when the data in the model was
last refreshed. Additional credit is given if this can be supplemented with
information from load processes further upstream.
Management of users
Multiple users of models with write facilities
Relational databases generally offer facilities to prevent update errors when
multiple users access the same data. If an MDDB allows users to write
values back, it must provide similar locking mechanisms to prevent lost
updates.
We describe the mechanism used by the tool and give credit if it locks for
writing but allows read-only access.
User security profiles
We describe the way in which individual and group profiles are defined.
Credit is given for a system that supports a heterogeneous user community
with a granularity which allows visibility, read and write permissions to be
controlled at an individual level.
Query governance
If it is possible for users to issue ‘the query from hell’ that monopolises the
processing capabilities of the system, then it is necessary to have some form
of query governance to prevent inexperienced or overly demanding users
from bringing the system to its knees.
(MOLAP tools, for which this is not a problem, receive full credit.)
Effective query governance has several levels, from the ability to inform
users of the time a query will take, to the prevention of queries above a
defined threshold. Credit is given depending on the range and sophistication
of available options.
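The levels of query governance described above can be sketched as a two-threshold governor. The thresholds and the ‘estimated rows’ cost model are invented placeholders for whatever estimate a real governor would obtain from the query optimiser.

```python
# Sketch: two-level query governance -- warn the user above one threshold,
# refuse the query above a higher one. Thresholds and the row-count cost
# model are illustrative placeholders.

WARN_ROWS, BLOCK_ROWS = 100_000, 10_000_000

def govern(estimated_rows):
    if estimated_rows > BLOCK_ROWS:
        return "refused"                       # 'the query from hell'
    if estimated_rows > WARN_ROWS:
        return "warn user, run if confirmed"
    return "run"

print(govern(5_000))        # -> run
print(govern(2_000_000))    # -> warn user, run if confirmed
print(govern(50_000_000))   # -> refused
```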
Restricting queries to specified times
A feature that can be useful to allow for maintenance work or to control the
usage pattern is to be able to restrict users to certain days or times of the
day. Here we describe and give credit for the available options.
Management of metadata
Controlling visibility of the ‘road map’
There is a need to be able to hide politically sensitive data, and credit is
given if this is supported. For example, the general manager may be able to
‘see’ a dimension relating to personal productivity while other employees
cannot.
Adaptability
Summary
A brief discussion of the product score in this dimension.
Change in business requirements
Adding new dimensions to a model
The nature of OLAP is that end users will request additions to the model, no
matter how thorough the requirements phase. This is a sign that the system
is being used, not an indication of a poor requirements spec. Here we give
credit for the ease with which new dimensions can be added to the business
model and any change management facilities to support this.
We give extra credit for a system that incorporates a mapping layer, which
defines the data in the data source in business terms. When the business
model is created, the developer uses the resources defined in the mapping
layer rather than the original source data. The advantages of this approach
are that it ensures consistency of terms, it reduces duplication of effort and
the layered approach is easier to manage.
Re-use of dimension definition
Adaptability is facilitated if dimension definitions can be re-used. Credit is
given if the newly created dimension can be named, described, stored and
easily retrieved.
Adding new measures to a model
Just as end users will request the addition of new dimensions, they will also
want to incorporate new measures into the model. This is credited as in Re-
use of dimension definition.
Re-use of calculated measure definition
Adaptability is facilitated if calculated measures can be reused. For maxi-
mum flexibility, these should allow the base measures to be referenced by
either a name or an index.
Credit is given if the newly created measure can be named, described, stored
and easily retrieved.
Changing the architecture to reflect business needs
A high-level distinction between MOLAP and ROLAP architectures is that
the former is optimised for speedy retrieval but has limited scalability,
whereas the latter can deal with datasets with large numbers of dimension
members but is slower. If user requirements always clearly fell into one camp or the
other, the choice of tool could be heavily influenced by its mode of use. The
reality is that users’ needs do not always point clearly to one mode of stor-
age.
For instance, a system that initially seems to require a MOLAP-type solution
may then incorporate data sources that put pressures on the scalability of
the MDDB. Conversely, end users may, in practice, only use parts of what
originally appeared to be data sources with many millions of dimension
members and could benefit from conversion to a MOLAP-type solution.
Another solution is a HOLAP one, in which summary data is held in an
MDDB or similar and the detailed data is held in a relational database and
retrieved as required. The user should be unaware of the source of the data
being viewed.
Here we describe and give credit for the ease of changing the architecture to
align it with new user requirements.
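The HOLAP routing described above, in which the user is unaware of the source of the data, can be sketched in miniature (both stores below are stand-ins, plain dictionaries in place of an MDDB and an RDBMS):

```python
# Sketch of transparent HOLAP routing: summary values are answered from an
# in-memory store, detail is fetched from the relational source on demand.
# Keys and figures are invented for illustration.

summary_store = {("UK", "2023"): 1_500_000}   # precalculated aggregates

def fetch_detail_from_rdbms(key):
    """Stand-in for a SQL query against the relational detail tables."""
    detail_rows = {("UK", "2023", "Jan"): 120_000}
    return detail_rows.get(key)

def get_value(key):
    """The caller never needs to know which store answered the query."""
    if key in summary_store:
        return summary_store[key]
    return fetch_detail_from_rdbms(key)

print(get_value(("UK", "2023")))         # served from the summary store
print(get_value(("UK", "2023", "Jan")))  # transparently fetched from detail
```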
Metadata
Synchronising model and model metadata
The model and information about the model need to be synchronised. Some
parts of this process can be automated (for instance, if the description of a
dimension includes the number of members it contains), but inevitably
much of the metadata is entered manually. The simplest, and probably most
effective, way of ensuring synchronisation is if the system automatically
prompts for new metadata when edits are made.
Credit is given for the effectiveness of ensuring synchronisation. In cases
where there is no metadata to synchronise, full credit is given.
Impact analysis
Changing the data sources affects the business models and this, in turn,
affects any reports based on them. Credit is given for tools that support
impact analysis so that the consequences of changes can be anticipated and
dealt with in advance.
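Impact analysis amounts to walking a dependency graph from the changed data source downwards. A minimal sketch (the source, model and report names are invented):

```python
# Sketch of impact analysis over source -> model -> report dependencies:
# everything downstream of a changed object is flagged before the change
# is applied. Dependency data is illustrative.

depends_on_me = {
    "sales_source":  ["sales_model"],
    "sales_model":   ["q1_report", "region_report"],
    "q1_report":     [],
    "region_report": [],
}

def impacted(changed: str) -> set:
    """Walk the dependency graph from the changed object downwards."""
    seen, frontier = set(), [changed]
    while frontier:
        node = frontier.pop()
        for child in depends_on_me.get(node, []):
            if child not in seen:
                seen.add(child)
                frontier.append(child)
    return seen

print(sorted(impacted("sales_source")))  # the model and both reports are affected
```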
Metadata audit trail (technical and end users)
If the history of the metadata is stored then end users and technical devel-
opers can use this to get an understanding of how the models have changed
over time. Additional credit is given here if end users can easily carry out
such an audit.
Access to upstream metadata
Adapting the system is easier if the designer has access to full information
about the data. Here we describe any integration with third-party tools that
gives access to metadata generated during the extraction part of the process.
Ideally this metadata will capture details about the sources, upload details,
transformations and the quality checks carried out on the data, as well as
descriptive text.
Performance tunability
Summary
A brief discussion of the product score in this dimension.
ROLAP
Multipass SQL
ROLAP tools issue SQL queries against relational databases to retrieve the
data required to build the business model. The data can be retrieved either
by a single SQL query or by using multipass SQL. With multipass SQL, as its
name suggests, multiple SQL queries are generated and processed, stored in
temporary files and finally combined after processing is complete. The
advantage of multipass SQL is that more complex queries can be supported.
For example, calculations requiring aggregation at different levels within a
dimension: showing sales at regional level as a percentage of sales at
country level requires two passes, one to sum sales at regional level and a
second to sum sales at country level. The two results are then combined to
give the percentage.
Credit is given if the tool supports multipass SQL.
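The two-pass example can be shown in miniature using Python's sqlite3 module with invented data; a ROLAP engine would generate comparable SQL against the warehouse and hold intermediate results in temporary tables:

```python
# Sketch of multipass SQL: one pass sums sales at regional level, a second
# at country level, and the two result sets are combined afterwards.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (country TEXT, region TEXT, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                [("UK", "North", 40.0), ("UK", "South", 60.0)])

# Pass 1: regional totals (a ROLAP engine would store these temporarily)
regional = dict(con.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region"))
# Pass 2: country totals
country = dict(con.execute(
    "SELECT country, SUM(amount) FROM sales GROUP BY country"))

# Combine the passes once processing is complete
pct = {r: 100.0 * v / country["UK"] for r, v in regional.items()}
print(pct)  # {'North': 40.0, 'South': 60.0}
```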
Options for SQL processing
SQL processing, such as sorting and ranking, can either be carried out on
the database server or the OLAP server. The advantage of using the data-
base server, particularly for operations such as filtering for the top ten, is
that the network traffic can be reduced. However, the drawback is that
complex processing requires the creation of many temporary tables which
can cause a bottleneck.
We give credit if the developer has choices about whether the data process-
ing takes place on the database server or the OLAP server, or if the system
intelligently balances the processing.
Speeding up end-user data access
Retrieval time is an issue for ROLAP tools. There are two parts to the
process: the retrieval of the data and the calculation of the cross-tab results
from it. Data access can be speeded up by the storage of data in relational
tables once it has been retrieved, or storing it once it has been further
processed for cross-tab display, that is, in a more optimised form. However it
is stored, the end user needs to be aware that they are using stored rather
than freshly retrieved data, and should be informed about the currency of it.
Credit is given if data can be stored for re-use and the user is always aware
that stored data is being accessed.
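The requirement that the user always knows the currency of stored data can be sketched as follows (the cache and query stub are illustrative, not any product's mechanism):

```python
# Sketch of re-using stored result sets while reporting their currency to
# the end user, as the criterion above requires.
import time

_cache = {}  # query text -> (time fetched, rows)

def run_query(sql: str):
    """Stand-in for an expensive retrieval from the warehouse."""
    return [("UK", 100.0)]

def get(sql: str):
    """Return (rows, note): stored data is flagged with its retrieval time."""
    if sql in _cache:
        fetched_at, rows = _cache[sql]
        age = time.time() - fetched_at
        return rows, f"stored data, retrieved {age:.0f}s ago"
    rows = run_query(sql)
    _cache[sql] = (time.time(), rows)
    return rows, "freshly retrieved"

rows, note = get("sales by region")
print(note)              # first access retrieves fresh data
rows, note = get("sales by region")
print(note)              # second access is served from store, and says so
```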
Aggregate navigator
Aggregate navigators process SQL queries so that they automatically make
use of summary tables and thus speed up retrieval time by minimising the
processing. Credit is given if the tool offers integration with an aggregate
navigator or equivalent built-in functionality.
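The rewriting an aggregate navigator performs can be sketched as a simple lookup (table names and the mapping are invented for illustration):

```python
# Sketch of an aggregate navigator: if a summary table already holds the
# requested grouping, the query is rewritten to read from it rather than
# re-aggregating the base fact table.

summary_tables = {
    ("sales_fact", "region"): "sales_by_region",   # precomputed rollup
}

def navigate(table: str, group_by: str) -> str:
    """Return the SQL actually sent to the database."""
    summary = summary_tables.get((table, group_by))
    if summary:
        # the rollup already exists: read it straight from the summary table
        return f"SELECT {group_by}, total FROM {summary}"
    return f"SELECT {group_by}, SUM(amount) FROM {table} GROUP BY {group_by}"

print(navigate("sales_fact", "region"))   # rewritten to use the summary table
print(navigate("sales_fact", "product"))  # falls back to the base fact table
```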
MOLAP
Trading off load time/size and performance
Load time, rather than end-user performance, is a particular issue for
MDDBs. A major contributing factor is the re-calculation of stored aggre-
gates. Credit is given if the tool offers support so the administrator can
decide how to trade off the poorer performance resulting from an incomplete
set of precalculated aggregates against the faster load time and reduced size
resulting from this. Extra credit is given if the tool minimises the effect of
adding new data by making use of metadata about the dimension and
recalculating only those values that are affected. (This is sometimes called
incremental roll-up.)
Processing
Use of native SQL to speed up data extraction
From an OLAP tool vendor’s point of view, OLE DB or ODBC is the simplest
way to connect to data sources for the extraction of data, as it means that
only one set of SQL commands has to be produced regardless of the type of
database being accessed. However, if the OLAP engine can generate native
SQL for data extraction, the extraction process can frequently be speeded
up.
Credit is given if native SQL can be used to extract data from the major
RDBMSs.
Distribution of processing
The OLAP engine is responsible for data extraction, the calculation of
aggregations and the creation of cross-tabular data. There is a danger that,
as the number of end-user queries increases, the engine becomes a bottleneck. The
most obvious way of avoiding this is to distribute the processing with
automatic load balancing. Here we describe and give credit for such
facilities.
SMP support
Parallelism speeds up processing. Here we give credit if the server compo-
nent of the tool is based upon a multi-threaded architecture that can take
advantage of symmetric multiprocessing.
Customisation
Summary
A brief discussion of the product score in this dimension.
Customisation
Option of using a restricted interface
Although most of the tools described in these evaluations are easy to use, the
range of options available means that there is always a learning curve for
the new or occasional user. What is needed is a means of producing a suit-
able interface for such users.
There are two approaches to the problem:
• for the tool to offer a restricted interface option
• for the developer to be able to produce, essentially using point-and-click
rather than programming, a simple-to-use front end to selected models.
The option of a restricted interface is evaluated here, and the second option
in the next section.
Ease of producing EIS-style reports
We describe how a simple-to-use front end is created within the tool and give
credit if it is straightforward.
Applications
Simple Web applications
The problem this section addresses is identical to the one described above
under Ease of producing EIS-style reports, except that we are now consider-
ing the production of such an EIS-style report that can be viewed with a Web
browser.
Credit is given for ease of producing such an application.
Development environment
The development of applications is greatly facilitated by the provision of an
OLAP-specific development environment that includes components such as
tables supporting drill-down, and linked chart and visual display options.
General application development languages such as Visual Basic and C++ do
not provide such components: they are only provided in the specialist OLAP
application development environments.
Credit is given for the nature and quality of such specialist support.
Use of third-party development tools
The drawback of the specialist development environment is that it requires
the programmer to learn another language. Here we describe whether
applications can be developed in a familiar programming environment such
as Visual Basic, C++ and/or Java. Credit is given for the number of such
environments supported.
Summary
Growth of the business intelligence market
Trends in business intelligence
Key messages for the market
Article: Market analysis and forecast Ovum Evaluates: OLAP
Summary
Growth and uncertainty
The OLAP software tools market is worth $1.5 billion in 1999 and will grow
steadily to a $4 billion-plus industry by 2003. The market is complex and is
characterised by a lack of clear leaders, a large number of vendors and a
complex web of alliances and partnerships. Accompanying this growth is
uncertainty. New entrants, such as Microsoft and SAP, add a new dimen-
sion to the market and have radically changed its structure.
The OLAP market cannot be viewed in isolation – it overlaps with a wider
‘market’ defined as business intelligence. Business intelligence is a high
growth sector which Ovum predicts will have an overall spend (including
software, hardware and services) of over $20 billion at the start of the new
millennium (see Figure 1).
Underlying the steady growth are radical changes in the way that business
intelligence is packaged and delivered. There are significant trends in:
• how business intelligence systems are built
• what is being built.
The most significant change is the transition from a ‘build’ to a ‘buy’
paradigm.
[Figure 1: business intelligence spend, 1999–2003 ($ billion)]
Business intelligence
Business intelligence is a wide term that requires definition of both
processes and technology. It includes:
• technology-related processes, such as extracting data from a company’s
operational systems and databases and integrating that data into a
coherent whole in order to make it suitable for analysis
• business-related processes, such as determining the appropriate forms of
analysis required to support decision making, disseminating and
interpreting the results of analysis, and determining how best to feed
back the results into a company’s operations.
These processes are supported by a wide range of closely related tools and
technologies that make up the business intelligence market:
• OLAP – including multidimensional databases, relational OLAP engines
and front-end OLAP clients
• query and reporting
• data mining
• data extract, transform and load (ETL)
• relational DBMSs.
Typically, a large business intelligence implementation will use many, if not
all, of these tools, as well as a significant amount of consulting.
Market growth
Ovum’s estimate of the industry spend on business intelligence software
(excluding services) over the next five years is shown in Figure 2. We expect
business intelligence to be a more popular area for new development in
1999 than OLTP. But there are still a significant number of user organisa-
tions that are holding off development until after 2000, when the market
will experience rapid growth.
[Figure 2: business intelligence software spend, 1999–2003 ($ billion)]
Data warehousing
Business intelligence is often closely associated with data warehousing.
While there is considerable overlap in terms of definition, data warehousing
mainly focuses on the technical process of building and maintaining a store
of data that is specifically intended to be used for decision support.
There are back-end processes for loading data into the data warehouse
using ETL tools, and front-end processes for accessing and analysing data
using OLAP and data-mining tools.
For some years now, OLAP vendors have benefited from the surge of inter-
est in data warehousing (and datamart) implementation. Most medium to
large organisations have a data warehouse strategy in place, and most of
those data warehouses are feeding data to one or more OLAP tools.
Market composition
A survey of the OLAP market shows that it is very fragmented. There are
more than 30 vendors providing a wide spectrum of OLAP products, though
not all are direct competitors. Despite the consolidation that might have
been expected, and the number of mergers and acquisitions that have
recently occurred, the number of major players in the market has remained
more or less the same. New entrants into the OLAP market are Microsoft,
SAP and the merchant database vendors, and their impact is already being
felt.
[Figure: OLAP market spend, 1999–2003 ($ billion)]
Most OLAP vendors are increasing their revenues, even though they may
be losing market share. There is no clear-cut leader; the largest player is
Hyperion Solutions (formed by the merger between Hyperion Software and
Arbor Software in August 1998), which has just over 20% of market share.
Analytical applications
Analytical applications are sold either as packaged ‘ready to go’ applications
(conceptually a ‘datamart-in-a-box’) or complete toolkits. There are several
specialist vendors that provide these types of products: Hyperion Solutions,
Comshare, Kenan, Gentia Software, SAS and Seagate Software. ERP vendors such as SAP and PeopleSoft are also entering this segment by selling
packaged vertical datamarts, which include specialised OLAP applications
that analyse data from the OLTP systems.
As many of these analytical applications are aimed at vertical or horizontal
markets, there is room for many vendors to happily co-exist. But the suc-
cessful vendors will be those that can identify a lucrative niche market and
gain significant market share within it. In the long term there may only be
room for two or three major players in each narrow niche.
Packaged analytical applications represent the newest market segment in
business intelligence. We expect the number of applications to increase
dramatically over the coming years.
Market development
Over the next five years, the market will be shaped by large, influential
players, notably Microsoft and SAP. Microsoft’s SQL Server 7.0 OLAP
Services, and SAP’s Business Information Warehouse (SAP BW) add a new
dimension to the market:
• Microsoft’s entry will serve to raise the profile of OLAP considerably and
accelerate its adoption beyond its traditional niche (namely large
corporate finance departments)
• Microsoft’s competitive marketing strategy of bundling OLAP
capabilities into SQL Server 7.0 will expand OLAP at the low- to mid-
end of the market
• the backing of a large influential vendor also has the potential to
address the interoperability issues that have dogged OLAP for some
years. The OLE DB for OLAP API standard, spearheaded by Microsoft,
is rapidly becoming an industry standard that is enabling OLAP servers
and clients from different vendors to work together, and is stimulating
ISVs to produce a new generation of tools, applications and clients
• SAP BW raises the stakes by providing a low-cost, high-return-on-investment packaged data warehouse with integrated OLAP analytical
applications. This is a direct challenge to vendors (both of OLAP and
ETL origin) that previously managed by selling point tools to access and
analyse SAP data.
Microsoft’s entry drives OLAP closer to a commodity software market.
Using a combination of simplicity, pricing and bundling, Microsoft is aiming
to make OLAP servers almost as widely used as relational databases. OLAP
servers will be increasingly treated as commodity components that can be
bought, configured and embedded without a great deal of time and effort.
SAP is also positioning its BW product as a solution that can be used out of
the box, where OLAP is a value-added component.
[Figure: trends over time in the percentage of projects requiring tool selection and the percentage of projects which are 'buy' rather than 'build']
• the vendor has the ability to provide a one-stop shop for the ‘soft’
components of the system – licence negotiation, customer support and
services, stability and longevity – and the value of these to the IT user
organisation. Some vendors are much more capable than others in these
areas. Moreover, the very wide range of functionality and maturity of
the components of one-stop shop solutions makes it very important for
users to check the technical suitability of the solution before considering
the added value of ‘soft’ factors.
There are several vendors offering to relieve IT users of the pain of OLAP
tool selection and the subsequent integration. We assess the different sets of
players and their offerings, and assess their worth to user organisations as
well as the pitfalls of each approach that you should be aware of.
ISVs
A number of independent software companies position themselves as offer-
ing a complete ‘turnkey’ business intelligence solution using their own
products. These range from the relatively small start-up companies (gener-
ally associated with datamarts) such as Sagent and Broadbase, to the
larger, well established software producers, such as Platinum Technology
and SAS Institute. While it is not easy to generalise about the added value
of these infrastructures, three observations can be made:
• in a small scale business environment, with limited access to IT support,
there is undoubtedly added value over an unintegrated set of tools
• not all components are equally strong, so a best-of-breed infrastructure
would not necessarily deliver greater functionality
• the benefits of integration, particularly metadata, have not been fully
exploited.
Merchant database vendors
A number of database companies have assembled ‘end-to-end’ solutions by
OEM’ing components from point tool vendors. Examples include:
• Sybase, which includes WhiteLight Systems’ WhiteLight ROLAP server
as an OEM extension to its Warehouse Studio (under the brand name
‘Power Dimensions’)
• IBM, which licenses Hyperion Solutions’ Essbase multidimensional
server for its IBM DB2 OLAP product and is a core element in its Visual
Warehouse offering.
These vendors claim to offer more than the OLAP vendors themselves are
offering. The real question is the extent to which there is real integration of
the components or whether the integration is primarily in the presentation
layer. In most cases, adopting the database vendor’s package is effectively
adopting the database vendor’s own best-of-breed selection. The primary
benefit is the reduction in decision-making effort, but seldom does the value
of the whole much exceed the sum of the value of the parts in technical
terms.
Systems integrators
Increasingly, systems integrators are developing business intelligence (and
data warehousing) solutions as one of their key service offerings. Often the
systems integrators are the delivery mechanism for one of the business
intelligence solutions described above. Where implementation is done by a
systems integrator it is often difficult for the customer to know just how
‘out of the box’ the solution really is.
The added value of using a systems integrator to provide a one-stop shop
solution is that it insulates the user from tool choice, implementation and
integration issues. Again, the degree of added value depends on the amount
of integration needed. When, and if, packaged analytical solutions deliver
on their promises, the amount of added value will diminish for integration,
and service providers will concentrate more on providing assistance with
data analysis and using the results to improve business performance.
Hardware vendors
The ‘one-stop shop’ offerings of hardware manufacturers are typically a
collection of components from third-party vendors; the added value is
limited to the benefits of dealing with one party and non-technological
issues such as licensing and pricing.
Many of these hardware vendors act mainly as systems integrators –
though this is not clear from their marketing.
We would expect, in the short term at least, the lure of packaged analytical
solutions to override such concerns for many user organisations. But in the
longer term, IT users will need to ensure that vendors of packaged analyti-
cal solutions have a convincing story about integrating point solutions with
an enterprise-wide business intelligence strategy. Otherwise, many users
who buy packaged analytical solutions will find themselves building
data warehouses as well, to address the inflexibility of this approach.
Criteria
[Rating charts: BusinessObjects, DecisionSuite, Essbase, PowerPlay and Seagate Holos, each scored 1–10 against the evaluation criteria]
Summary
At a glance
Terminology of the vendor
Ovum's verdict
Product overview
Future enhancements
Commercial background
Product evaluation
Deployment
Platforms
Data access
Standards
Published benchmarks
Price structure
Evaluation: Applix – Applix TM1 Ovum Evaluates: OLAP
At a glance
Developer
Applix, Westboro, Massachusetts, USA
Versions evaluated
Applix TM1 version 7.1
Key facts
• An MDDB that runs in memory and works with standard spreadsheets or
OLE DB for OLAP clients
• Server runs on Windows 95, Windows 98, Windows NT and Unix; clients
run on Windows 3.1, Windows 95, Windows 98 and Windows NT
• TM1 was the first OLAP server to support Microsoft's OLE DB for OLAP
as a data provider
Strengths
• Extremely quick response times provide fast OLAP analysis for small
datasets
• No time-consuming precalculation or batch loading of the MDDB
• TM1 architecture is well suited for remote and mobile computing
Points to watch
• Does not yet support its own web client – web access relies on third-party
tools
• Performance can be adversely affected by large concurrent user loads
when accessing models with high volumes of data and complex OLAP
calculations
• Simple modelling tools that are best suited to small data volumes
Ratings
[Rating chart: Web support, Management, Adaptability, Performance tunability and Customisation, each scored 1–10]
Ovum’s verdict
What we think
TM1 takes to heart the concept of OLAP as ‘a spreadsheet on steroids’. In
this respect, it will be most appreciated by financial users wishing to com-
bine the flexible display and ad hoc calculations of spreadsheets with the
performance of a multidimensional database.
TM1 scores modestly according to our evaluation criteria, but in some
specialised applications it has definite strengths. Its best feature is its
memory-resident OLAP engine, which is unique on the OLAP market and
provides exceptional performance. TM1 multidimensional databases can be
put in memory and calculated quickly in real time. TM1 only stores the
lowest level of model data in the OLAP engine and calculates aggregations
on demand, so it avoids the ‘data explosion’ problems associated with
MDDBs. It also removes the need for batch recalculations each time fresh
data is uploaded. TM1’s small data footprint is ideal for mobile computing
environments and strong data replication and synchronisation capabilities
are provided for disconnected analysis.
TM1 performs best as a ‘local’ desktop OLAP solution; its exclusive calcula-
tion-on-demand approach is optimised for small data volumes and small
numbers of users. Query performance can be affected by large numbers of
concurrent read-write users working with models that contain high volumes
of data with deep hierarchies and complex calculations. However, new
features in version 7.1 make the product more credible as a scalable server
product. The spreadsheet interface provides a simple and flexible OLAP
front end, but TM1 would benefit from more robust modelling tools.
TM1 exploits the OLE DB for OLAP interface to provide web access and
advanced analysis and reporting functions through third-party tools. Hence,
the level of functionality is dependent on the front-end tool used. There is
little support for developing custom analytical applications, although TM1
can easily be embedded into third-party systems. Overall, TM1 is a mature
OLAP tool, with many reference sites, but it has suffered from poor market-
ing in recent years. Applix will need to shout loud and clear to get its ‘real-
time OLAP’ message across in an increasingly competitive OLAP server
market.
When to use
TM1 is suitable if you:
• want to support financial planning, budgeting and forecasting
applications that require relatively small data volumes
• want to build large business models of five or more dimensions, where
traditional MDDBs would ‘explode’ the data
• have dynamic applications that require frequent input of data,
recalculation, or analysis of ‘what-if’ scenarios
• have existing spreadsheet skills you want to exploit
• want to support mobile users for offline analysis.
Product overview
Components
TM1 version 7.1 consists of the following components:
• TM1 Server
• TM1 Perspectives
• TM1 Client
• TM1 Architect
• TM1 Data Control
• TM1 Process Objects
• TM1 API.
Figure 1 shows the primary functions of the components and how they relate
to client-server systems.
TM1 Server
A multidimensional database and OLAP engine that stores and provides
access to multidimensional models (cubes) managed in local or remote TM1
Servers.
TM1 Server works with memory-based cubes and its most significant fea-
ture is its memory-resident calculation engine; all OLAP consolidations and
calculations are performed on-the-fly, rather than working with
precalculated cubes stored on disk. The engine can be run as a multi-user
remote server, or from within the TM1 Perspectives and TM1 Architect
clients in local, ‘standalone’ mode.
TM1 Perspectives
A 32-bit client interface, for spreadsheet users, to the TM1 Server. TM1
Perspectives is provided as an add-on to Microsoft Excel and Lotus 1-2-3
spreadsheets. It has three main components:
• TM1 multidimensional database engine, which stores cubes in memory
and performs OLAP calculations on demand
TM1 Client
TM1 Client is an independent version of the Spreadsheet Integration compo-
nent found in TM1 Perspectives. The TM1 Client is an exclusive interface for
spreadsheet users who want to access predefined TM1 cubes and does not
provide the development or administration capabilities of TM1 Perspectives.
TM1 Client can also run against a multi-user remote TM1 Server, or in a
‘standalone’ mode against a local TM1 Server.
TM1 Classic is a 16-bit version that integrates with Excel 5 and Lotus 5
spreadsheets, but does not support a local TM1 Server engine.
TM1 Architect
TM1’s standalone development, deployment and administration tool. TM1
Architect provides the same functionality as TM1 Perspectives, with the
exception of Spreadsheet Integration. It supports the same Server Explorer
GUI, including a functional Cube Viewer and Dimension Subset dialogues
that are designed to aid development. TM1 Architect is also used to manage
applications deployed through third-party clients (OLE DB for OLAP and
Java).
Process Objects
Version 7.1 provides new server-based Process Objects. These supersede the
client-based data import and update facilities by providing server-side
capabilities for handling complex event-driven scheduling. Process Objects
can be linked to the TM1 rules language for conditional triggering of back-end
TM1 Server processes, such as mapping, transformations and creating and
updating TM1 cubes and dimension hierarchies.
TM1 API
The TM1 API is the element that allows native TM1 clients or third-party
applications to interact with the TM1 Server. TM1 supports four APIs:
• API 7.0, which provides public documented access to TM1’s own
language, and all the calls necessary to develop, manage and use TM1
applications. It is available in C++ and Visual Basic
• JavaBean API, which allows third parties to develop Java-based OLAP
applications. The TM1 Java API contains all the functionality of the C++
API
• OLE DB for OLAP as a data provider. This opens TM1 for access from
any OLE DB for OLAP consumer tool
• API 1.5, which translates applications written to TM1 Server 6.0’s 1.5
API specification into the appropriate 7.0 API call.
Architectural options
Full mid-tier architecture
This is the ‘natural’ architecture for TM1, consisting of TM1 Servers and
clients running TM1 Perspectives, TM1 Architect and/or TM1 Client. The
TM1 Server loads data in memory on the mid-tier server and services
requests for data from TM1 Client programs. If the client is an OLE DB for
OLAP consumer, it uses an MDX parser facility to interpret requests and
return data.
A web server can be introduced to provide access to the TM1 Server using
supported OLE DB for OLAP clients that offer web access.
The OLAP engine and MDDB store run in memory on a mid-tier server, where all OLAP calculations are carried out. Cubes can be stored persistently on disk, in either a proprietary disk-based structure or any ODBC-compliant database. This provides the option of storing cubes in relational tables and working with these tables through relational query tools.
Depending on the configuration, TM1 clients may have exclusive access to a single local TM1 Server, which acts as a repository for their private data, or shared access to one or more remote TM1 Servers; the level of access depends on the security group assigned by administrators. TM1 supports multiple cubes that can be distributed across several servers. Data can also be replicated from/to another remote satellite server, and updates can be synchronised back to the source server.
Using TM1
Real-time analytical processing
TM1’s most distinctive feature is its support for ‘real-time analytical processing’ (RAP). This is made possible by its RAM-based OLAP engine (for which Applix has a patent), which loads and runs the MDDB entirely in memory. The MDDB can be loaded into memory because all derived values in TM1 are calculated on demand. This avoids both ‘database explosion’ and the lengthy precalculation or recalculation times when loading or updating the database. The downside is that derived values take time to calculate; generally, the time taken is a function of the complexity of the calculations and the depth of the dimensional hierarchies in the model. Additionally, TM1 uses a compression algorithm to minimise memory use. A source database record holding one number destined for the MDDB typically requires 50–100 bytes; TM1, on the other hand, stores one number (plus indices) in approximately 12 bytes. Hence, a TM1 multidimensional database is typically 10–25% of the size of its data source.
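The storage arithmetic quoted above is easy to check directly; the short illustrative calculation below uses only the figures given in the text:

```python
# Quick check of the storage arithmetic: TM1 stores one number (plus indices)
# in roughly 12 bytes, against 50-100 bytes per source database record.
tm1_bytes_per_value = 12

for source_bytes in (50, 100):
    ratio = tm1_bytes_per_value / source_bytes
    print(f"{source_bytes}-byte source records -> MDDB is {ratio:.0%} of source size")
```

The two endpoints come out at 24% and 12%, consistent with the 10–25% range quoted.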
Performance, in terms of response time, is also enhanced by retaining calculation results in memory (as long as they are still valid) to support future requests. This prevents the same calculation from being repeatedly executed for a cell and can greatly increase performance. TM1 flags calculations as invalid when values in the cube are modified; the next time a value is requested, a fresh calculation is performed.
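The on-demand calculation and invalidation scheme described above can be pictured schematically. The following is an illustrative toy model only, not Applix’s patented implementation; all names are hypothetical:

```python
class Cube:
    """Toy sketch of TM1-style on-demand calculation: derived values are
    computed only when requested, retained in a cache, and flagged invalid
    (here: discarded) when base values in the cube are modified."""

    def __init__(self, rules):
        self.base = {}       # cell -> stored input (base-level) value
        self.rules = rules   # cell -> function(cube) computing a derived value
        self.cache = {}      # cell -> previously calculated derived value

    def set(self, cell, value):
        self.base[cell] = value
        self.cache.clear()   # modifying the cube invalidates cached results

    def get(self, cell):
        if cell in self.base:
            return self.base[cell]
        if cell not in self.cache:          # calculate on demand, then retain
            self.cache[cell] = self.rules[cell](self)
        return self.cache[cell]

# A derived 'total' cell summing two base cells
cube = Cube(rules={"total": lambda c: c.get("jan") + c.get("feb")})
cube.set("jan", 100)
cube.set("feb", 200)
print(cube.get("total"))   # 300, calculated on first request and cached
cube.set("feb", 250)       # invalidates the cached total
print(cube.get("total"))   # 350, recalculated on the next request
```

The trade-off the text describes falls out of the sketch: nothing is precalculated at load time, so load is fast and storage small, but each first request for a derived value pays the calculation cost.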
The new Process Objects in version 7.1 remove much of the manual effort,
by adding a layer of automation and management for the data integration
and loading process.
Spreadsheet-based analysis
TM1 Perspectives and TM1 Client are both designed to exploit existing
spreadsheet skills. TM1 is tightly integrated with standard Microsoft Excel
and Lotus 1-2-3 spreadsheets via special add-ons; TM1 commands are
available from a single pull-down menu from within the spreadsheet.
Two main navigational and analysis tools are provided: the Cube Viewer
dialogue and the Dimension dialogue.
Cube Viewer
The Cube Viewer dialogue, shown in Figure 2, allows end users to navigate
through TM1 models from the worksheet.
The Cube Viewer facility represents the structure of the cube and shows the
dimensions that make up the model. Each dimension is presented as a
button; the arrangement of buttons determines a particular ‘perspective’ of
model data that can be sliced into the spreadsheet. Version 7.1 also supports
OLAP functions from an OLE object directly embedded in the spreadsheet.
Dimension dialogue
Double-clicking on any dimension in the Cube Viewer brings up the Dimension dialogue shown in Figure 3, which allows end users to refine what appears in the Cube Viewer by selecting and filtering the dimension’s members. The advanced settings in the Dimension dialogue provide access to dimension edit and query functions, and to OLAP functions such as drill-down and roll-up for querying data in the cube.
By using the advanced browser features in TM1, a subset of the data can be defined that represents a useful ‘perspective’ of the data for future use. This perspective can be saved as either:
• a worksheet, called a ‘slice worksheet’
• a subset of the model data, called a ‘view’.
A view provides slice-and-dice capabilities, but does not support charting or
spreadsheet formatting options.
Slice worksheets are similar to standard worksheets: end users can format
them and add rows, columns or new formulas. Although the slice worksheet
loses its pivot capability, it does allow users to place any cell from any cube
within the worksheet. Slice worksheets remain linked to the TM1 Server.
Therefore, if a number changes in the multidimensional database, that
change is reflected in the worksheet when it is recalculated. Similarly, if the
worksheet user changes a value, it is also changed in the corresponding cell
in the TM1 database, provided they have the correct access privileges.
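The two-way link between slice worksheet cells and the server amounts to a read-through/write-back binding. The sketch below is purely illustrative (hypothetical names, not the actual TM1 add-in mechanics):

```python
class LinkedCell:
    """A worksheet cell bound to a server-side cube cell: reads pull the
    current server value, writes push back, subject to access rights."""

    def __init__(self, server, address, writable):
        self.server, self.address, self.writable = server, address, writable

    @property
    def value(self):
        return self.server[self.address]   # recalculation re-reads the server

    @value.setter
    def value(self, v):
        if not self.writable:
            raise PermissionError("read-only access to this cell")
        self.server[self.address] = v      # change lands in the server database

# Hypothetical server store keyed by cell address
server = {("sales", "jan"): 100}
cell = LinkedCell(server, ("sales", "jan"), writable=True)
cell.value = 150
print(server[("sales", "jan")])   # 150: the worksheet edit reached the server
```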
Figure 3 Dimension dialogue
Future enhancements
Version 7.2 of TM1 is expected in the summer of 1999. It will contain two
major enhancements:
• scenario cubes, which support the creation of cubes that are ‘variants’ (scenarios) of other cubes. The scenario cube overlays the source cube and includes changes made to the scenario cube by the cube user. This will allow users to make changes to the cube without affecting other users. Any changes can subsequently be committed into the source cube if desired. The same approach can be used to import large sets of data that are held in ‘suspense’ and incorporated on demand
• dynamic subsets, which allow dimension subsets to be defined by an expression, rather than as a list of members. The subsets are dynamic and automatically synchronise with changes to their underlying data or metadata. A subset editor will be provided to define expressions and store them as objects in the server, which are re-evaluated when data or metadata changes.
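The planned behaviour of dynamic subsets, where membership is computed from an expression rather than held as a static list, can be illustrated schematically (hypothetical data, not the planned subset editor itself):

```python
# A 'dynamic subset' as an expression over dimension members: re-evaluating
# the expression after the underlying data changes yields the updated subset,
# with no member list to maintain by hand. Data and threshold are illustrative.
products = {"widgets": 120, "gadgets": 45, "gizmos": 80}   # member -> sales

subset_expr = lambda dim: [m for m, v in dim.items() if v > 50]

print(subset_expr(products))        # ['widgets', 'gizmos']
products["sprockets"] = 200         # data/metadata change...
print(subset_expr(products))        # ...the subset synchronises automatically
```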
As part of its web strategy, Applix is working with established partners to
develop web-based analytical applications. Integration with Revelwood’s
SmartSite development environment will be available in the second quarter
of 1999. This will include integration with Microsoft FrontPage for web page
development.
The development of a Java version of Architect is under review.
Commercial background
Company background
History and commercial
Applix is a US company founded in 1983 to develop and market software applications for the Unix market. In 1986, it introduced Alis, its first office automation product. Alis was replaced by the Aster*x product, which provided the technology for Applixware, a suite of real-time decision support tools. In 1995, Applix acquired Target Systems, a developer of customer interaction software, and two major business lines were subsequently formed: Decision Support Systems (DSS) and Customer Interaction Software (CIS).
All Applix’s products are based on the concept of real-time decision support:
• Applixware, an integrated family of desktop and development tools for
real-time decision support applications
• TM1, a multidimensional OLAP server that runs in memory
• Applix Enterprise, a suite of customer-interaction management systems
including Applix Service (for customer support), Applix HelpDesk (for
internal IT support) and Applix Sales (for sales and marketing support)
• Applix Anyware, software that exploits Java and thin-client computing
for deploying Applixware and Enterprise applications over the Internet.
The original developer of TM1 was TM1 Software (founded in 1984 as
Sinper) – a privately-owned venture specialising in OLAP products. Applix
acquired TM1 Software in 1996 for $11 million. TM1 Software is now an
operating unit of Applix based in Warren, New Jersey, US. TM1 was first
released in 1984 as a single-user, DOS-based multidimensional engine. The
product was completely re-architected for client-server systems in 1989.
Applix is a public company quoted on Nasdaq. Revenues for 1997 dropped
5% to $48.5 million, and the company recorded a net loss of $0.4 million.
Applix employs more than 300 people and is based in Westboro, Massachusetts, US, with seven regional offices in North America and subsidiaries in Europe and Asia-Pacific.
TM1 is sold directly and via channel partners, including systems integrators,
ISVs, consultants, OEMs and more than 100 VARs. Historically, around 95%
of sales were through partners. TM1 has a particularly strong presence in
markets such as banking and telecommunications.
Applix has bundling deals with a number of OLAP vendors. However, a long-
standing licensing agreement with Hyperion Software has ended after
Hyperion’s merger with Arbor Software in September 1998 – Essbase will
now be the server of choice for Hyperion’s analytical applications. Other
major TM1 partners include Comshare, Information Advantage, IBI,
Platinum Technology and JBA.
Customer support
Support
Applix can provide around-the-clock worldwide support for TM1, via
telephone, e-mail, fax and the Web. The company sponsors local, national
and international user groups.
Training
Applix offers public training courses held at local sites or at its headquarters
in Westboro, Massachusetts, US. Onsite training is also available.
Consultancy services
Applix offers performance tuning and site-specific implementation services for TM1, although most implementations are done by partners. Applix’s own consulting operations are based in the US, France, Germany and the UK, and are supplemented by partners worldwide.
Distribution
US
Applix
112 Turnpike Road
Westboro, MA 01581
USA
Tel: +1 508 870 0300
Fax: +1 508 366 0995
Europe
Applix (UK)
114 Middlesex Street
London E1 7HY
UK
Tel: +44 171 426 0915
Fax: +44 171 426 0916
Asia-Pacific
Applix
9 Raffles Place #27-01
Republic Plaza
Singapore 048619
Tel: +65 435 0490
Fax: +65 536 4315
E-mail: applixinfo@applix.com
http://www.applix.com
Product evaluation
End-user functionality
Summary
TM1 uses either Microsoft Excel or Lotus 1-2-3 as a front end. It will therefore appeal to experienced spreadsheet users, since it allows for the easy browsing of models in spreadsheets that can also become reports. OLAP functions are set up via a menu-driven interface rather than directly from the spreadsheet. The spreadsheet lacks the flexibility to support large distributed environments and does not support the advanced OLAP reporting and data visualisation functions provided by other dedicated OLAP front ends. However, integration with a wide range of OLE DB for OLAP tools compensates for this.
Basic design
Design interface
TM1 supports two design interfaces for building models:
• TM1 Architect, which provides a number of graphical dialogues for point-
and-click development
• TM1 Perspectives, which uses the spreadsheet as the development
interface.
Visualising the data source
It is possible to view source tables. However, it is not possible to view the
database schema or bring up a sample of data on screen.
Universally available mapping layer
TM1 does not support a universal mapping layer.
Prompts for metadata
TM1 does not provide any automatic prompts for including metadata during
the model-building process.
Time dimension
TM1 does not require a cube to have a special time dimension, but can recognise time dimensions if defined. Aggregations and other OLAP calculations over time must be built into the model manually.
Annotating the dimensions
There is no support for the annotation of model dimensions.
Default level of a dimension hierarchy
When a model is created, the system default shows only the highest level of
consolidated elements along the title dimensions. However, end users can
create a subset of data for a specific dimension level. This can be saved as a
‘view’ for future access.
Multiple designers
Other than simple locking mechanisms, there are no special check-out/in
facilities to support multiple designer environments.
Support for versioning
There is no support for versioning files created by TM1.
User-definable extensions
TM1 supports a non-procedural rules language for defining custom multidimensional and inter-cube calculations.
Data mining
There is no support for data mining in TM1.
Web support
Summary
TM1 does not provide its own web client for accessing the TM1 Server. The TM1 web strategy is based upon providing a range of capabilities for TM1 application deployment and development across the Web from third-party tools using the OLE DB for OLAP interface. The level of OLAP functionality provided varies considerably from tool to tool, and most integrations are read-only.
Management
Summary
Most model administration tasks are achieved graphically using the Server
Explorer interface. Strong data replication and synchronisation facilities are
provided to manage distributed environments. The introduction of Process
Objects greatly improves the back-end processing and scheduling capabilities
of the tool. Because of its concentration on financial applications, TM1’s security goes further than that of most OLAP products. Access controls can be defined at model, dimension or cell level for users, groups or servers. TM1’s monitoring facilities log all OLAP transactions, but the presentation of metadata could be improved.
Management of models
Separate management interface
The Server Explorer provides a graphical interface for managing models and
administering local and remote TM1 Servers. Common model administration
tasks are achieved through two windows:
• one presents hierarchical lists of models and dimensions and other
related server objects that are accessible via the TM1 Server
• the other references the properties of the TM1 Server objects.
Security of models
Security controls can be defined for servers, cubes, dimensions and elements
to restrict access to models:
• cube-level security governs overall user access to models; privilege levels
include read/write access, reserve access (provides exclusive rights to the
model until it is released), lock access (means that other users cannot
modify the model, but can access it as read-only)
• element-level security governs access to cells identified by certain
elements
• dimension-level security governs the ability to add, remove and re-order
the elements in a dimension.
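The layering of these controls, with cube-level access checked first and element-level rules then restricting individual cells, might be pictured as follows. This is a hypothetical sketch; the access-control lists, group names and privilege values are illustrative, not TM1’s actual security model:

```python
# Illustrative layered access check in the spirit of TM1's cube- and
# element-level security. All names and privilege values are hypothetical.
CUBE_ACL = {"sales": {"analysts": "read", "planners": "write"}}
ELEMENT_ACL = {("sales", "bonus"): {"analysts": "none"}}

def can_read(group, cube, element):
    """A cell is readable only if the group has cube-level access AND no
    element-level rule explicitly denies the element identifying the cell."""
    if CUBE_ACL.get(cube, {}).get(group, "none") == "none":
        return False                          # no cube-level access at all
    return ELEMENT_ACL.get((cube, element), {}).get(group, "read") != "none"

print(can_read("analysts", "sales", "revenue"))  # True
print(can_read("analysts", "sales", "bonus"))    # False: element rule denies
```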
Query monitoring
TM1 Server tracks the transactions made by client OLAP requests in an
ASCII log file. It provides information about who made the change, what
model it was made to, when it was made and how certain cell values were
affected.
Management of data
How persistent data is stored (not scored)
TM1 stores data persistently on disk as compressed proprietary files. It also
offers the option of storing data in relational tables.
Only the lowest-level detail (base-level) data for a TM1 multidimensional
model is stored persistently, and is loaded into memory on the server when
requested by end users. All consolidations and calculations are done on-the-
fly and are also stored in memory.
Scheduling of loads/updates
Process Objects provides an activity scheduler for controlling tasks such as
defining and executing ODBC queries for loading and updating cubes from
relational databases or flat file systems.
Event-driven scheduling
Process Objects supports external event-driven scheduling functions.
Failed loads/updates
Data Control provides a ‘key error report’ listing the key errors that occurred during the data load/update process and other background processing tasks.
Distribution of stored data
TM1 Server supports server-to-server replication of data. Replication is bi-directional, and the ability to see or change replicated model data is managed through security assignments. Model metadata (dimensions and rules) is only replicated from the replication server to the planet servers, and cannot be changed on the planet servers.
Sparsity
For calculations across sparse dimensions, TM1 uses a sparse consolidation
algorithm that skips over areas of the model that are zero or undefined.
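A sparse consolidation of this kind visits only the cells that are actually stored, rather than the full dimensional cross-product. The following schematic illustration (hypothetical data, not Applix’s algorithm) shows the idea:

```python
# Sparse consolidation sketch: roll up along one dimension by iterating only
# the stored non-zero cells, so zero/undefined areas of the model are skipped.
from collections import defaultdict

# Cells keyed by (product, region, month); only non-zero values are stored
cells = {
    ("widgets", "uk", "jan"): 10,
    ("widgets", "us", "jan"): 5,
    ("gadgets", "uk", "feb"): 7,
}

def consolidate(cells, axis):
    """Roll up along the dimension at position `axis`, touching stored
    cells only; the cost scales with populated cells, not cube volume."""
    totals = defaultdict(int)
    for key, value in cells.items():
        rollup_key = key[:axis] + ("ALL",) + key[axis + 1:]
        totals[rollup_key] += value
    return dict(totals)

print(consolidate(cells, axis=1))   # e.g. ('widgets', 'ALL', 'jan') -> 15
```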
Methods for managing size
TM1 only stores the lowest input elements in the model and does not precalculate any data, which considerably shrinks the size of the MDDB. Additionally, TM1 uses compression technology and algorithms for both disk and memory storage.
In-memory caching options
TM1 relies on the caching configuration options provided by the Windows
operating system.
Management of users
Multiple users of models with write facilities
TM1 supports concurrent multi-user read/write access.
User security profiles
User security is maintained by groups, and users can belong to multiple groups. User profiles are built upon six access levels and can easily be set up and maintained graphically.
Query governance
There is no concept of query governance within TM1.
Restricting queries to specified times
There is no support for restricting queries to specified times of the day.
Management of metadata
Controlling visibility of the ‘roadmap’
TM1’s security schemes control overall access to models and metadata.
Adaptability
Summary
Metadata
Synchronising model and model metadata
There is very little model metadata to synchronise in TM1.
Impact analysis
There is no support for analysing the impact of changes in the data source
on TM1 models.
Metadata audit trail (technical and end users)
TM1 automatically logs database changes, which can be viewed in a detailed
log file. The information logged includes the date and time the transaction
was made, the name of the client, the value changes before and after the
transaction, and elements that identify the cells that have changed.
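The kind of record such a log captures can be sketched as follows; the field names are illustrative only, not TM1’s actual log format:

```python
# Illustrative shape of an audit-trail record covering the logged details:
# when the transaction was made, by whom, to which model, the before/after
# values, and the elements identifying the changed cell. Names are hypothetical.
import datetime

def log_change(client, cube, cell, before, after):
    return {
        "timestamp": datetime.datetime.now().isoformat(),
        "client": client,
        "cube": cube,
        "cell": cell,        # elements identifying the changed cell
        "before": before,
        "after": after,
    }

entry = log_change("jsmith", "sales", ("widgets", "uk", "jan"), 100, 120)
print(entry["client"], entry["before"], "->", entry["after"])
```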
Access to upstream metadata
There is no access to external metadata sources or repositories.
Performance tunability
Summary
ROLAP
TM1 is a MOLAP-only tool.
MOLAP
Trading off load time/size and performance
TM1 loads only the lowest level input elements of a model into the OLAP
engine and does not precalculate and store data during batch loads. All
calculations are done in real time as users request data. Model data is stored
in a very efficient manner, allowing it to be easily loaded into memory.
Subsequent calculations are also stored in memory for enhanced
performance.
Processing
Use of native SQL to speed up data extraction
TM1 uses ODBC to extract data from relational databases.
Distribution of processing
TM1 supports a multi-cube architecture and is able to distribute processing across multiple ‘regional’ cubes that feed a higher-level ‘consolidation’ cube. The cubes can be stored and processed across clusters of TM1 Servers.
SMP support
TM1 supports multi-threading and SMP parallelism.
Customisation
Summary
TM1 is an out-of-the-box OLAP solution, and there is limited scope for developing custom applications or interfaces. TM1 does not support a visual development environment, but an API is provided that allows third-party tools to access TM1 Server functions. Integration with third-party OLAP development environments is supported via OLE DB for OLAP.
Customisation
Option of a restricted interface
The different versions of the TM1 client tools (Perspectives, Architect and
Client) naturally lend themselves to providing restricted functionality.
However, it is not possible to turn off specific functions for different users.
Applications
Simple web applications
TM1 supports a JavaBean API, which allows OEM-type development of
bespoke Java applications that access TM1 Server.
Development environment
TM1 does not support a visual development environment.
Use of third-party development tools
The TM1 API allows for the development of custom front ends using Visual
Basic, PowerBuilder, Delphi and C++.
Deployment
Platforms
Client
TM1 clients (Perspectives, Architect, Client and Classic) run on Windows 95,
Windows 98 and Windows NT. The TM1 Classic client also runs on Windows
3.1.
TM1 Perspectives and TM1 Client require Microsoft Excel (7 and 97) or Lotus 1-2-3 97. TM1 Classic supports Excel 5 and Lotus 5.
Server
TM1 Server runs on Windows 95, Windows 98, Windows NT, Unix (AIX,
Solaris and HP-UX) and Linux.
Data access
TM1 can access data from any ODBC-compliant relational database. It can
also access Microsoft SQL Server OLAP Services MDDB.
Standards
TM1 supports Microsoft’s OLE DB for OLAP API as a data provider. Applix
has established a third-party certification programme for its OLE DB for
OLAP partners.
Published benchmarks
In December 1997, Applix published results of the OLAP Council’s APB-1
benchmark for TM1.
Price structure
Pricing for TM1 Server ranges from $28,000 for five concurrent users to $110,000 for 100 concurrent users.
Evaluation: Brio Technology – Brio Enterprise Ovum Evaluates: OLAP
At a glance
Developer
Brio Technology, Palo Alto, CA, USA
Version evaluated
Brio Enterprise, version 6.0
Key facts
• A desktop business intelligence tool that provides query, OLAP analysis
and reporting
• Servers run on Windows NT and Unix; clients run on Windows 3.1,
Windows 95, Windows 98, Windows NT, Macintosh and Unix Motif. Web
access is also provided
• In June 1999, Brio acquired Sqribe Technologies, a provider of web-based
enterprise query and reporting tools; it plans tight integration between
the two product sets
Strengths
• A tightly integrated OLAP suite that is easy to use and deploy
• Sophisticated report distribution via ‘push’ and ‘pull’ web servers
• Strong metadata integration with a range of data warehousing tools
Points to watch
• Not the strongest product set for complex OLAP analysis
• Inconsistent server administration facilities
• Desktop architecture has scalability issues for analysing large datasets
Ratings
Web support
Management
Adaptability
Performance tunability
Customisation
Analytic application
Brio defines an ‘analytic application’ as a customised solution that addresses
a specific set of analysis requirements for either a horizontal or vertical
market and/or a specific set of business users.
DataCube
A dynamic view of multidimensional data that is fed directly from the
Desktop DataCache. Each Brio report has a DataCube as its basis.
Data model
A visual representation of database tables that provides end users with a
business-oriented view of the data. Data models are saved as part of a
document file and stored in a central repository (typically in a relational
database).
Desktop DataCache
This stores a slice of relational data extracted from a database. The
DataCache is stored in a compressed format on the client and provides the
source data for analysis and reports. The DataCache also supports a local
OLAP engine.
Document
A Brio file that stores data model and query specifications for retrieving data
from a database, as well as the results set and reports created from the
queried data. Documents can be stored locally on the client or remotely in a
file server.
Items
Items are discrete informational attributes of topics (such as customer ID)
and represent the column fields of data in database tables. Items are
organised within topics and are used to query data. Computed items
calculate a fresh value for each original value based on a computation; for
example, revenue calculated from price and units.
Repository
A special set of relational tables that centrally stores data models and
document security settings. The repository information is referenced each
time a document is requested by end users.
Topics
Topics provide a visual representation of tables in the database and are an
element of data models. Topics are organised in logical groupings, which
reflect a particular aspect of the business, such as customers or sales. Each
topic contains a list of items. Meta-topics are custom topics created from
items in other topics. They are used to simplify views of the underlying data
by creating ‘virtual’ tables that are independent of the underlying database.
Ovum’s verdict
What we think
Brio Enterprise is a strong client tool for business intelligence that is both
easy to use and quick to deploy, although its administration facilities could be
improved. It offers a tightly integrated suite of tools that is best suited to
large, geographically dispersed organisations that need to provide users with
easy access to a wide range of data sources from a PC or web browser.
Cross-platform support in conjunction with ease of use makes Brio
Enterprise particularly suitable for deployment across large enterprises. It is
designed to exploit the Internet or corporate intranets. Business intelligence
can easily be deployed across the enterprise via a flexible ‘publish-and-
subscribe’ model using ‘push-and-pull’ servers for report processing and
distribution – although these capabilities come with a substantial price tag.
Brio’s Adaptive Reports capability allows administrators to easily adjust the
functionality of reports to match the diverse needs of users. End users are
provided with an equally flexible set of tools that is suitable for general
business use, rather than complex analytical applications. Visual
development capabilities have also been integrated into the core query, OLAP
and reporting environment to build analytical front-ends. The development
tools adequately support simple application needs, but lack the sophistication
of a fully-fledged development environment.
Brio Enterprise easily connects to a range of multidimensional and relational
data sources via its ‘snap-in’ APIs. It also sets the standard for metadata
integration with data warehouses. This removes the need for Brio to
maintain proprietary semantic layers, and ensures that views of data models
are based on consistent metadata that is shared across the enterprise.
Brio Enterprise’s server administration facilities are mediocre and dilute the
product suite’s overall strength. The product’s client-centric architecture
easily lends itself to the analysis of small parts of the database, which are
automatically loaded onto the desktop for immediate OLAP analysis. While
this is ideal for ease of use and quick deployment, scalability is limited by the
size of the datasets being analysed and the complexity of the OLAP
calculations; optimal performance is gained when working with results sets
of less than 50,000 rows. However, the effective use of metadata means that
users can query large data warehouses and often still work satisfactorily
with results that fit within these criteria.
When to use
Brio Enterprise is suitable if you:
• want an integrated out-of-the-box query, OLAP and reporting solution that is easy to use and quick to roll out
• are a large, geographically dispersed enterprise that wants to provide
general business users with easy access to data and the ability to develop
simple analytic applications without IS involvement
• want to exploit your corporate intranet for distributing corporate data
• want easy connection to a wide range of data sources
• already have a data warehouse or OLAP server in place and are looking
for a flexible front-end tool that easily connects to it.
Product overview
Components
Brio Enterprise, version 6.0, consists of the following components:
• Brio Enterprise servers – OnDemand Server and Broadcast Server
• BrioQuery end-user tools – Designer, Explorer and Navigator
• web clients – Brio.Insight and Brio.Quickview.
Figure 1 shows the primary functions of the components and whether they
run on the client or the server.
The web clients also use Brio’s Adaptive Reporting technology; both clients
can ‘adapt’ their capabilities based on a combination of the content of each
report and the user’s security profile. These capabilities can be restricted to
simple report browsing or can provide users with full query and analysis
functions.
Brio.Insight
A web-based query, OLAP analysis and reporting tool that is offered as a
plug-in to existing web browsers. It provides a similar level of functionality to
the BrioQuery Navigator tool, including ad hoc query and OLAP analysis.
Brio.Insight works with both of the Brio Enterprise servers; it can
manipulate documents posted by the Broadcast Server and allows ad hoc
querying when used with OnDemand Server.
Brio.Quickview
This is a web-based report browser extension, which allows business users to
access and view portfolios of precomputed and formatted Brio reports; it only
supports the ‘view’ and ‘view and process’ capabilities provided by
Brio.Insight. End users can navigate multiple reports by going through a
series of tabs at the bottom of a Brio document.
When used in conjunction with OnDemand Server, administrators can also grant end users the right to refresh the data on demand on a report-by-report basis, or limit the view based on a set of criteria. Scripts and EIS-tabs can also be used to guide novice users through a series of reports.
Architectural options
not part of the slice. For example, if users want to limit the view of a cube to
data only for 1998, the member value 1998 from the year dimension is simply
dragged into the Slicer tool.
A Filter Box is also provided for defining limits once levels have been
introduced in reports. The Filter Box allows for the setting of comparison
operators that act on the values for that member (similar in concept to
member selection). Additional server-specific functions are available in the
Filter Box to be included as part of the limit. Each MDDB supports its own
list of functions – representative functions include top N and top N%.
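A ‘top N’ filter of the kind exposed in the Filter Box simply ranks member values and keeps the leaders; a schematic illustration (hypothetical data, not Brio’s implementation):

```python
# Top N member filter sketch: rank members by value, keep the N largest.
# The data and the helper name are illustrative.
sales = {"uk": 300, "us": 900, "de": 450, "fr": 120}

def top_n(values, n):
    """Return the n members with the largest values, highest first."""
    return dict(sorted(values.items(), key=lambda kv: kv[1], reverse=True)[:n])

print(top_n(sales, 2))   # {'us': 900, 'de': 450}
```

A ‘top N%’ variant would rank the same way but keep members until their cumulative share of the total reaches the requested percentage.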
Metadata integration
A key feature of Brio Enterprise is its ability to directly access upstream
metadata from data warehousing tools and a number of third-party metadata
repositories from Informatica, Ardent and others (an exhaustive list is
provided on Brio’s website).
Access to third-party metadata eliminates the need to define and maintain proprietary metadata within the Brio environment. As the tools source metadata dynamically from the underlying data sources, any changes to the associated metadata are automatically reflected in the query tool at runtime.
If a metadata source is available and stored in relational tables, Brio
Explorer and Brio Designer can use the Open Metadata Interpreter (OMI)
facility to link it to data models and apply metadata information
automatically. The OMI is a feature of Brio’s Open Catalogue, which manages
database connectivity through a graphical connection interface, as shown in
Figure 4. The interface provides several tabs for adjusting connection
preferences and accessing table, column, join, lookups and remarks
metadata.
Snap-in metadata templates are included and available to users via the Meta
Connection wizard. These templates provide the definitions required for the
‘Remarks’ interface and are fully customisable.
A ‘Remarks’ dialogue on a topic or item, as shown in Figure 5, may have separate tabs displaying the definition of a column, the last update of the table, the number of rows in the table, the transformation rules applied to the data and the source of the column. Administrators can define as many different elements of metadata to display as required.
Future enhancements
The next release of Brio Enterprise will focus on integrating the software
tools acquired from Sqribe. The release of Brio Enterprise version 7.0 is
planned for the first half of 2000 and is expected to deliver a full integration
between the two product sets. This will include a central user repository and
enhanced and fully integrated administration tools for the servers. In
particular, the transaction-oriented capabilities of Sqribe’s ReportMart
enterprise information portal architecture will be integrated within Brio
Enterprise’s web environment.
For the core Brio Enterprise suite, OLE support, both as a consumer and a
provider (server), will also be added. Connection to Oracle Express is
expected in the second half of 1999 – after version 2.0 of the OLAP Council’s
MDAPI is made publicly available.
Brio is also planning more sophisticated data visualisation tools and will
move the product to a much thinner client architecture. It will also provide
snap-in metadata capabilities to the Microsoft and Platinum metadata
repositories.
Brio aims to deliver a number of vertically focused analytic applications; its
subsidiary company, MerlinSoft (which Brio acquired in 1998), is considering
vertical niche opportunities.
As part of the company’s Private Label Partner programme, Brio expects an
increasing number of vertical analytic applications ‘powered’ by Brio
technology to be developed by partners, such as Broadbase.
Commercial background
Company background
Customer support
Support
International around-the-clock support (GlobalPlus) is available via
telephone, e-mail or the Web. Web support is particularly strong and includes
the ability to monitor and control support requests internally and from
distributors. Support centres are located in the US, the UK, France and
Australia. Annual maintenance and support contracts range from 15% to 25%
of the licence fee.
Training
Brio offers a range of public and private on-site courses for all its products;
one- or two-day classes are available for casual end users, power users and
administrators. Brio has a number of certified training partners and also
provides computer-based training packages.
Consultancy services
Consulting services are provided by Brio’s Expert to Expert group. A typical
engagement involves working closely with data warehousing projects to
provide advice on model design, metadata integration, connectivity and
implementation. Brio also maintains numerous referral partnerships with
external consultancies.
Distribution
US
Brio Technology
3460 West Bayshore Road
Palo Alto
CA 94303
USA
Tel: +1 650 856 8000
Fax: +1 650 856 8020
Asia-Pacific
Brio Technology
Suite A, Level 10
121 Walker Street
North Sydney, NSW 2060
Australia
Tel: +61 2 9964 9533
Fax: +61 2 9964 9755
E-mail: info@brio.com
http://www.brio.com
Product evaluation
End-user functionality
Summary
1 2 3 4 5 6 7 8 9 10
All the end-user tools have functional and extremely user-friendly interfaces to
support general business analysis. The tools offer varying levels of
sophistication and users can easily navigate between query, reporting and
analysis using report ‘tabs’. The optional semantic layer speeds up query
creation by shielding users from the complexities of SQL and the database schema. All the
tools support advanced WYSIWYG report construction and sophisticated
groupware facilities, using scheduled agents and ‘push-and-pull’ servers to
enable easy distribution across the enterprise. Wizards are provided to help
users through complex tasks.
BrioQuery is a capable tool that suits the needs of small departments.
However, there is little reason to use Explorer or Navigator, given the
additional benefits provided by the server-based web tools.
Publishing a report
The ‘report bursting’ option in Broadcast Server allows users to publish a
single query and have the result tailored for, and delivered to, different
people or divisions within the enterprise. Results data can be managed and
distributed according to different rules and criteria. End users can also
publish HTML reports for web access, using an HTML wizard and report
templates. Web users also benefit from adaptive reports.
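Report bursting of this kind can be sketched as follows; the burst function and the column and recipient structures are hypothetical simplifications of what Broadcast Server does, not its actual interface.

```javascript
// Hypothetical sketch of 'report bursting': one query result is split on a
// grouping column and each slice is paired with its own recipient list.
// The function name and data shapes are invented for illustration.
function burst(resultRows, groupColumn, recipients) {
  const slices = new Map();
  for (const row of resultRows) {
    const key = row[groupColumn];
    if (!slices.has(key)) slices.set(key, []);
    slices.get(key).push(row);
  }
  // recipients maps each division to the addresses that should receive it
  return [...slices].map(([division, rows]) => ({
    division,
    to: recipients[division] || [],
    rows,
  }));
}

const result = [
  { division: 'EMEA', product: 'A', sales: 100 },
  { division: 'US',   product: 'A', sales: 250 },
  { division: 'EMEA', product: 'B', sales: 75 },
];
const jobs = burst(result, 'division',
  { EMEA: ['emea@example.com'], US: ['us@example.com'] });
// jobs[0] holds the two EMEA rows addressed to emea@example.com
```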
Targeted distribution via e-mail
Broadcast Server integrates with Microsoft Exchange and other MAPI or
SMTP-based e-mail systems. E-mail can be used to distribute documents or
for notification purposes (for example, to include a URL reference for an
HTML report), but dynamic distribution lists are not supported.
Subscribing to reports
Brio Enterprise does not provide any direct support for report subscription.
Summary
1 2 3 4 5 6 7 8 9 10
The process of building a business model is split between creating the data
model by mapping dimensions to database tables and columns, and then
using the model to query and analyse data. Easy-to-use graphical tools are
provided for both processes. However, model designers are expected to have a
good understanding of the underlying table and join structures – here the tools
would benefit from some wizard support.
The functions provided are geared towards general business modelling, rather
than building highly complex models – a single Brio query can only access
data from a single database. Designers are also restricted by the limited
support for defining complex calculated measures in models.
Basic design
Design interface
BrioQuery Explorer and Designer support a graphical workspace for building
data models. Modelling tasks, such as mapping topics and items to database
tables and columns, and defining joins, are achieved by point-and-click.
Visualising the data source
The table catalogue provides a graphical display of the source tables, columns
and data types. When tables are moved to the design workspace, the
relationships (joins) between them are graphically shown. It is also possible
to bring up a detail view of sample data.
Universally available mapping layer
A ‘master’ data model can be developed to provide an initial mapping layer,
upon which subsequent query and analysis is based. The master data model
provides a visual representation of the database, using familiar business
terms that cannot be changed by users. Any query section that refers to the
‘master’ data model will automatically inherit changes made to the main
data model.
Prompts for metadata
When loading a data model into the repository, end users are prompted to
include metadata information, such as model type, author and a textual
description.
Computed items can be defined as part of a data model. The computed item is a value, variable, logic statement
or formula that instructs the Brio client or the RDBMS to perform a certain
calculation. Standard arithmetic and logical operators can be used to create
computed items, either by typing into a formula panel or via point-and-click.
Scalar functions, which calculate and substitute a new data value for each
value associated with a data item, are also supported, although some are
provided by the RDBMS.
Support for multiple measures with a set of dimensions
Up to 20 measures can be associated with a set of dimensions.
Multiple designers
There is no special support for multi-designer environments.
Support for versioning
The repository provides a central, version-controlled database store of data
models.
Summary
1 2 3 4 5 6 7 8 9 10
Financial functions
There are no special financial functions.
Statistical models
The Brio client tools support basic statistical functions, such as median,
mode, percentile, standard deviation and variance. More advanced statistical
functions are only available through the Oracle RDBMS.
Trend analysis
There are no special functions for analysing trends.
Simple regression
Regression functions for forecasting are not available.
Time-series forecasting
There is no support for advanced time-series forecasting methods.
User-definable extensions
Users can build or extend functions by using the existing arithmetic, logical
and scalar functions and by exploiting native database functions through
SQL. JavaScript has been included as a function language within documents
– JavaScript expressions can be written to define more complex computed
columns in reports. But these functions can only be re-used within a single
document – there is no scope for storing them in a central repository for
wider re-use.
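A computed column of the kind described can be sketched as below; the helper function and field names are invented for illustration, since Brio's own document object model is not shown in the source.

```javascript
// Sketch of a JavaScript computed column: an expression evaluated per result
// row to produce a new column. addComputedColumn and the field names are
// invented; Brio's actual document object model is not shown here.
function addComputedColumn(rows, name, expr) {
  // build new row objects so the original result set is left untouched
  return rows.map(row => ({ ...row, [name]: expr(row) }));
}

const results = [
  { product: 'A', units: 10, price: 2.5 },
  { product: 'B', units: 4,  price: 7.0 },
];
const withRevenue = addComputedColumn(results, 'revenue',
  row => row.units * row.price); // arithmetic expression applied to each row
// withRevenue[0].revenue is 25; withRevenue[1].revenue is 28
```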
Data mining
Brio Enterprise does not support data mining.
Web support
Summary
1 2 3 4 5 6 7 8 9 10
Management
Summary
1 2 3 4 5 6 7 8 9 10
Management of models
Separate management interface
The concept of a ‘master’ data model lends itself to the centralised
deployment and management of data models.
The management of data models in the repository is carried out graphically
using the BrioQuery Designer interface. The OnDemand and Broadcast
Servers share a graphical administrator interface for defining end-user
security and managing system-level settings.
Security of models
The security of models relies on the underlying database security schemes.
Brio Enterprise focuses security on the document repository and the
distribution layer.
Query monitoring
A SQL log monitors all queries, including all SQL statements generated and
usage activity (such as the number of rows returned). The SQL shown can be
edited to optimise the performance of a frequently requested query.
Additionally, an auditing feature allows administrators to collect usage
statistics about data models stored in the repository. The information can
include how long queries take to process and which tables and columns are
used most often.
Management of data
How persistent data is stored (not scored)
Persistent data is stored in document files either in the repository (which can
be a relational database), in a remote file server or locally on the client.
Scheduling of loads/updates
The loading of data into the data warehouse is beyond the scope of Brio
Enterprise. Broadcast Server provides graphical scheduling tools for periodic
and batch-style processing and data refreshes. Schedules can be based on
timed intervals (ASAP, daily, weekly, monthly, quarterly or custom) or user-
defined events, and are supported by e-mail notifications.
Event-driven scheduling
Event-based scheduling is supported through the polling of data sources. The
Broadcast Server can be triggered to refresh reports based on external
events, such as the completion of an update to a data warehouse or a business
exception rule.
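The polling behaviour can be sketched as follows; checkTrigger and refreshReports stand in for Broadcast Server internals (for example, a query against a load-status table) and are hypothetical names.

```javascript
// Sketch of event-based scheduling via polling. checkTrigger stands in for a
// probe of the data source (e.g. a query against a 'load complete' status
// table) and refreshReports for the report refresh; both names are invented.
function pollUntilTriggered(checkTrigger, refreshReports, maxPolls) {
  for (let poll = 1; poll <= maxPolls; poll++) {
    if (checkTrigger()) {   // external event observed
      refreshReports();     // kick off the scheduled refresh
      return poll;          // which poll detected the event
    }
  }
  return -1;                // event not seen within maxPolls attempts
}

// Simulated trigger that fires on the third probe
let probes = 0;
const refreshes = [];
const firedOn = pollUntilTriggered(
  () => ++probes === 3,
  () => refreshes.push('refresh'),
  10
);
// firedOn is 3 and exactly one refresh was issued
```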
Failed loads/updates
All report processing activity, including errors and failed refreshes, is
monitored and logged in a job repository. Administrators and end users can
be notified (via e-mail or pager) upon the completion or failure of a report
processing activity. However, there are no automatic retry functions.
OnDemand Server does support failover. If processing fails on one server, it is
automatically re-submitted to another server in the cluster.
Distribution of stored data
Documents can be stored centrally in a repository (database server), on a
remote file server or locally (on the client machine).
Management of users
Multiple users of models with write facilities
Brio Enterprise does not support direct multi-user write-back facilities to the
database.
User security profiles
User and group-level security is maintained through a graphical
tree-like interface. Interaction levels and group assignments can be easily
administered via point-and-click.
Query governance
Query governors are available to set limits on the time a query may run
and the number of (unique) rows returned to the client.
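A governor of this kind can be sketched as below; governedFetch and its limit settings are illustrative only and do not reflect Brio's actual configuration options.

```javascript
// Illustrative query governor: stop fetching when a row limit or time budget
// is exceeded. The function and option names are invented for illustration.
function governedFetch(fetchRow, { maxRows, maxMillis }) {
  const start = Date.now();
  const rows = [];
  let row;
  while ((row = fetchRow()) !== null) {
    if (rows.length >= maxRows) return { rows, stopped: 'row limit' };
    if (Date.now() - start > maxMillis) return { rows, stopped: 'time limit' };
    rows.push(row);
  }
  return { rows, stopped: null }; // query completed within its limits
}

const queue = [{ id: 1 }, { id: 2 }, { id: 3 }];
const outcome = governedFetch(() => queue.shift() || null,
  { maxRows: 2, maxMillis: 5000 });
// outcome.stopped is 'row limit' and only two rows were returned
```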
Restricting queries to specified times
There is no support for restricting end-user queries to particular times of the
day, but a query-sizing feature is provided to query the database and to show
how many records a query will retrieve – this is useful for testing a
questionable query and postponing processing of large results sets during
peak network periods.
Management of metadata
Controlling visibility of the ‘roadmap’
Brio Enterprise relies on model and user security assignments to control the
visibility of metadata.
Adaptability
Summary
1 2 3 4 5 6 7 8 9 10
Brio data models are quite adaptable to change. New dimensions and
measures can easily be added to data models. Adaptability is greatly
enhanced by the OMI feature, which dynamically maps existing metadata
from a range of back-end data sources. This ensures that the data source, data
models and metadata are kept synchronised. Data models stored in the
central repository are automatically updated to reflect changes in the source
database. However, support for impact analysis and metadata audit trails is
not provided.
Metadata
Synchronising model and model metadata
The OMI feature ensures that changes in upstream metadata remain
synchronised with data models. However, descriptive metadata input when
loading a data model remains unaffected and must be manually updated.
Impact analysis
There is no support to inform the administrator of the effect on documents
and reports when there is a change in the structure of the data warehouse.
Metadata audit trail (technical and end users)
An audit trail showing changes to the history of the metadata is not
available. Generally, the auditing capabilities provided relate to usage of
data models.
Access to upstream metadata
Brio’s Open Metadata Interpreter (OMI) reads and interprets existing
metadata from most of the leading data warehousing tools. The OMI link
enables model designers to view extraction and transformation metadata, as
well as descriptive information and naming conventions about tables and
columns in the data warehouse to help them build data models. Metadata is
propagated so that Brio reports that run against old metadata can update
themselves (although some user intervention is usually required).
Integration is provided with a variety of data warehousing tools, including
Informatica’s PowerMart Suite, HP Intelligent Warehouse, IBM Visual
Warehouse, Ardent DataStage, Broadbase, Logic Works Universal Directory,
Carleton Passport, Pine Cone and Prism Warehouse Manager. Snap-in and
customisable metadata templates are also provided for several leading
metadata vendors.
Performance tunability
Summary
1 2 3 4 5 6 7 8 9 10
ROLAP
Multipass SQL
Multipass SQL is not supported.
Options for SQL processing
Processing can be carried out on the database server or the Brio server,
depending on where the calculations are defined. For example, measures
added to the query section will be processed by the RDBMS, while measures
added to the results section will be computed by the Brio server.
Brio Enterprise also supports native DB2, Teradata and Red Brick functions
for more sophisticated processing on database servers.
Speeding up end-user data access
The DataCache can be stored in a document and subsequently retrieved for
optimal query performance. The DataCache is time-stamped to show the
currency of the data.
Aggregate navigator
Brio Enterprise relies on aggregate awareness implemented in the target
database. There is no native aggregate navigation provided by the Brio tools.
MOLAP
Brio Enterprise is not a MOLAP tool.
Processing
Use of native SQL to speed up data extraction
Native access is supported for Oracle, Sybase (Adaptive Server), Red Brick
and Informix (Dynamic Server and MetaCube). Brio Enterprise uses ODBC
to access other relational data sources.
Distribution of processing
A cluster of OnDemand Servers can be configured and multiple queries can
be routed across the servers for simple load balancing. Each cluster
comprises a manager and one or more nodes – the distribution is ‘round
robin’ across all the active nodes in the cluster. Load balancing can also be
achieved for Broadcast Servers, although this requires significant
programming.
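The round-robin distribution described can be sketched as follows; the router function and node names are invented for illustration, not OnDemand Server's actual interface.

```javascript
// Minimal sketch of round-robin query routing across cluster nodes: the
// manager hands each incoming query to the next active node in turn.
// makeRoundRobinRouter and the node names are invented for illustration.
function makeRoundRobinRouter(activeNodes) {
  let next = 0;
  return function route(query) {
    const node = activeNodes[next % activeNodes.length]; // cycle through nodes
    next += 1;
    return { node, query };
  };
}

const route = makeRoundRobinRouter(['ods1', 'ods2', 'ods3']);
const assigned = ['q1', 'q2', 'q3', 'q4'].map(q => route(q).node);
// assigned is ['ods1', 'ods2', 'ods3', 'ods1']
```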
SMP support
The OnDemand Server is multi-threaded and supports SMP parallelism.
Customisation
Summary
1 2 3 4 5 6 7 8 9 10
Customisation
Option of a restricted interface
The Brio clients are essentially the same program, but with different
features disabled, based on their intended target audience. The Adaptive
Reports feature offers users five levels of interactivity, depending on the
user’s profile and the document’s profile.
Ease of producing EIS-style reports
The EIS tab section, included in BrioQuery Designer, supports a
development environment for building graphical front-ends and electronic
dashboards using a combination of drag-and-drop user interface controls and
JavaScript. Layout tools are provided for embedding objects such as bar
charts, hot spots, graphics or ‘top seller’ lists on a screen to be viewed by
high-level users. Data on the screen is live and updated regularly.
Applications
Simple web applications
Simple reporting applications can be developed for the web clients by using
the development and JavaScript scripting facilities provided.
Development environment
The EIS tab section provides graphical and scripting tools for development.
Report components and user application objects can easily be assembled on
screen using drag-and-drop. A number of standard user interface controls
(radio buttons, check boxes, list boxes and so on) are provided. A scripting
language (JavaScript) is available to manipulate Brio application objects and
build functionality into the application. The scripting editor supports basic
debugging and testing facilities.
The development environment is object-based rather than object-oriented –
which lends itself to simplicity and ease of maintenance.
Deployment
Platforms
Client
BrioQuery clients (Designer, Navigator and Explorer) run on Windows 3.1,
95, 98 and NT workstation, Apple Macintosh and Unix Motif.
Brio.Insight and Brio.Quickview run on Microsoft and Netscape web
browsers using their plug-in APIs.
Server
The Brio Enterprise servers (Broadcast Server and OnDemand Server) run
on Windows NT and Unix (HP-UX, Solaris and AIX). OnDemand Server
works with a variety of web servers (including Apache web servers on Unix)
via ISAPI, NSAPI and CGI.
Data access
Brio Enterprise provides native access to Oracle, Sybase, Red Brick and
Informix database servers. ODBC access is provided for other relational
databases, including IBM DB2, Teradata, Microsoft SQL Server,
QueryObjects (CrossZ) and White Cross.
It can also connect to third-party OLAP servers. Native access is provided for
Hyperion Essbase and IBM DB2 OLAP Server (both accessed via the
GridAPI), Informix MetaCube and SAP BW. OLE DB for OLAP access is also
provided to connect to Microsoft SQL Server 7.0 OLAP Services, NCR
TeraCubes, SAS, WhiteLight and Applix TM1.
Additionally, Brio Enterprise can also be accessed from knowledge
repositories, such as Sqribe (ReportMart) and VIT (MetaWarehouse). Access
to SAP R/3 is provided by Acta Technology’s RapidMarts for SAP. Integration
with other ERP systems is planned.
Standards
Brio Enterprise supports Microsoft’s OLE DB for OLAP as a consumer.
Published benchmarks
Brio has conducted its own internal benchmarking tests for
OnDemand Server – the results will be published by the end of 1999.
Price structure
Pricing for Brio Enterprise Server is $32,495 for Windows NT and $44,995
for Unix systems. Both editions include OnDemand Server and Broadcast
Server, Brio Enterprise Administrator and ten named-user licences for the
Brio.Quickview web client.
All server and client components are also separately available; OnDemand
Server costs $19,995 for Windows NT and $29,995 for Unix; Broadcast
Server costs $14,995 for Windows NT and $19,995 for Unix. Pricing for
individual Brio client tools ranges from $50 for Brio.Quickview, up to $3,995
for BrioQuery Designer.
Summary
At a glance
Terminology of the vendor
Ovum’s verdict
Product overview
Future enhancements
Commercial background
Product evaluation
Deployment
Platforms
Data access
Standards
Published benchmarks
Price structure
Evaluation: Business Objects – BusinessObjects Ovum Evaluates: OLAP
At a glance
Developer
Business Objects, twin headquarters in Paris, France and San Jose, USA
Versions evaluated
BusinessObjects version 4.0, comprising the BusinessObjects user module,
BusinessObjects Designer, BusinessObjects Supervisor, Document Agent
Server, BusinessQuery and BusinessMiner; and WebIntelligence II version
2.0
Key facts
• A client-based tool that provides OLAP query, analysis and reporting
• Runs as a Windows 3.1, Windows 95, Windows NT, Unix Motif or Java
client; servers run on Windows NT and Unix
• Business Objects has re-architected its product lines to support a 32-bit
component environment, and has made significant enhancements for web
access
Strengths
• An easy-to-use ‘out-of-the-box’ tool
• Flexible OLAP query, analysis and reporting available via the Web
• Graphical set of IS tools for creating the mapping layer and deploying
OLAP across the enterprise
Points to watch
• The mid-tier Document Agent Server component lacks the capabilities of a
full OLAP engine
• Potential scalability issues for large result sets and complex OLAP
calculations
• Little support for custom application development
Ratings
1 2 3 4 5 6 7 8 9 10
Web support
Management
Adaptability
Performance tunability
Customisation
Ovum’s verdict
What we think
Building on its established strength of providing flexible access to corporate
data, the most distinguishing feature of BusinessObjects is its ease of use.
The tool provides ‘out-of-the-box’ functionality for the masses of general
business users, rather than analysts seeking high analytical functionality, or
customers seeking to develop specialised OLAP applications.
The architecture supports a well designed mapping layer that shields end
users from the complexities of the underlying data sources. This provides
easy and flexible access to corporate data using familiar business terms. The
‘dynamic’ nature of BusinessObjects’ multidimensional models means that
users can easily direct queries to additional data during analysis in an ad
hoc way. The BusinessObjects graphical designer and administration tools
have also been implemented with an emphasis on usability, and specifically
address the problems of rolling out BusinessObjects to large numbers of
users. Business Objects has made significant strides in the area of web
enablement. Its WebIntelligence II product provides one of the strongest web
interfaces in the OLAP market, and is closely integrated with the
BusinessObjects tools in terms of end-user functionality and infrastructure.
The main challenge for Business Objects is to build on its early success in
the low-end business intelligence market to capture share in an ‘enterprise’
space that demands high performance and scalability. The BusinessObjects
mid-tier component provides a range of report processing services for
scheduling and distribution, but it does not constitute a full mid-tier OLAP
server. Customers whose queries return large data sets and require complex
OLAP calculations must therefore integrate with third-party OLAP engines,
or license specialist technology to support this capability.
When to use
BusinessObjects is suitable if you:
• want ‘out-of-the-box’ functionality
• want to empower hundreds of general business users with query,
analysis, reporting and data mining via an integrated interface
• want to support ad hoc queries to a range of data sources via a web
browser
• require quick and easy deployment across the enterprise.
It is less suitable if you:
• want to provide advanced analysis and forecasting functions, without
having to integrate with third-party technology
• want to build highly customised OLAP applications
• intend to build large, dimensionally complex business models that
require extensive OLAP calculations.
Product overview
Components
BusinessObjects comprises the following end-user tools and IS tools:
• BusinessObjects user module version 4.0
• Designer version 4.0
• Supervisor version 4.0
• Document Agent Server version 2.0
• WebIntelligence II version 2.0
• BusinessQuery version 4.0
• BusinessMiner version 4.0.
Figure 1 shows the primary functions of the components and whether they
run on the client or the server.
BusinessObjects is a client-based tool that provides ad hoc OLAP query,
analysis and reporting capabilities from a PC. A web configuration pushes
most of the processing to a mid-tier application server. BusinessObjects
works directly against relational data warehouses or datamarts, and data
providers enable access to non-SQL sources such as ERP applications and
third-party multidimensional databases. BusinessObjects provides a central
repository for metadata definitions, models and reports. The tool also
supports a Visual Basic-like scripting language called ReportScript for
customisation.
Designer
A graphical DBA tool for defining classes of business objects (equivalent to
dimensions of a business model) that map to the source database; Business
Objects refers to this mapping as its ‘semantic layer’ (for which it holds a
patent). The semantic layer allows end users to view data using familiar
business terms.
Every BusinessObjects client holds a local copy of the mapping layer.
However, to simplify distribution and administration it is also possible to store it
centrally in a repository within the database, from where it is available to all
end users.
Supervisor
A graphical administration tool for managing end users and system
resources. An object-based security model lets administrators assign and
modify the rights granted to groups of end users. User profiles include access
privileges to the mapping layer, reports and individual menu functions.
Additionally, the size of the result of a query, or the query execution time,
can be limited for end users.
WebIntelligence II
Enables query, OLAP analysis and reporting functions from a Java-enabled
web browser. WebIntelligence II includes an object request broker for the
server and a Java applet for the browser.
A web query panel is used to create new reports on-the-fly; this is
downloaded to the web browser as a Java applet, and includes a copy of the
mapping layer. Web users are provided with a range of report formatting
options and can also add their own simple calculations to report data. The
interface is based on a personal user homepage, from which end users can
access personal and shared documents, or targeted reports can be sent to an
‘inbox’ via the Document Agent Server.
BusinessQuery (optional)
A Microsoft Excel spreadsheet add-in that lets end users pull data directly
into Excel spreadsheets using the terminology from the BusinessObjects
semantic layer.
BusinessMiner (optional)
An end-user tool that provides data mining facilities. It uses decision-tree
algorithms to graphically depict hierarchical relationships in data.
Architectural options
Full mid-tier architecture
WebIntelligence II is a thin client implementation of BusinessObjects that
extends the architecture to incorporate a more powerful application server
that comes closer to the functions expected of a ‘full’ mid-tier OLAP server.
Significantly, this configuration moves most of the BusinessObjects code
from the client to the WebIntelligence II application server.
WebIntelligence II is based on the use of Java applets for ad hoc queries. By
downloading a web panel and a local copy of the mapping layer onto the web
client, users are provided with a similar level of functionality to the
BusinessObjects desktop client. WebIntelligence II runs as a set of centrally
managed software components. It has a distributed component architecture
(DCA) that supports multiple copies of the server components across
different web servers. DCA is implemented using Corba technology licensed from
Visigenic Software.
Desktop architecture
The ‘natural’ BusinessObjects client-server architecture is a two-tier desktop
architecture, based on a full BusinessObjects client that interfaces with one
or more relational databases.
Mobile architecture
The BusinessObjects desktop architecture ‘naturally’ lends itself to a mobile
architecture. While disconnected, users are restricted to the data stored in
the downloaded microcube.
Using BusinessObjects
Main concepts
It is easier to understand the BusinessObjects approach to OLAP if three
principal concepts – semantic layer, business objects and universes – are
clarified first.
The idea of providing users with a way to refer to corporate data in business
terms lies at the heart of BusinessObjects. It is achieved by using the seman-
tic layer, a centrally defined and controlled model of the underlying data-
base. This provides a mapping layer that allows end users to view data using
familiar business terms called business objects (approximately equivalent to
dimensions or measures in OLAP terms).
There are no restrictions on the way users combine business objects to
create their queries. Business objects are ‘semantically dynamic’, which
means that they retain their meaning in whatever combination they are
used. For example, the object ‘sales revenue’ will report the correct amount
whether used in conjunction with customer or product. The SQL statements
needed to retrieve the data are automatically generated by BusinessObjects,
with no awareness required on the part of the user.
Business objects are organised in classes; for example, the class ‘customer’
might consist of different business objects, such as age, group or sex. Uni-
verses consist of different classes and objects. Typically, different users use
different universes for different purposes. Multiple universes, such as sales,
personnel or inventory, can be created to meet the needs of different groups
of users.
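The way a semantic layer turns business objects into SQL can be sketched as below; the object definitions and the generateSQL helper are hypothetical simplifications, not BusinessObjects' actual (patented) metadata model.

```javascript
// Hedged sketch of semantic-layer SQL generation: each business object
// carries a SQL fragment, and combining objects yields one statement.
// These mappings are invented; the real metadata model is far richer.
const objects = {
  'sales revenue': { select: 'SUM(f.amount)', from: ['facts f'], groupBy: false },
  'customer':      { select: 'c.name', from: ['facts f', 'customers c'], groupBy: true },
};

function generateSQL(names) {
  const chosen = names.map(n => objects[n]);
  const select = chosen.map(o => o.select).join(', ');
  // union of the tables each object needs, without duplicates
  const from = [...new Set(chosen.flatMap(o => o.from))].join(', ');
  const groupBy = chosen.filter(o => o.groupBy).map(o => o.select);
  let sql = `SELECT ${select} FROM ${from}`;
  if (groupBy.length > 0) sql += ` GROUP BY ${groupBy.join(', ')}`;
  return sql;
}

const sql = generateSQL(['customer', 'sales revenue']);
// "SELECT c.name, SUM(f.amount) FROM facts f, customers c GROUP BY c.name"
```

A real semantic layer would also generate the join conditions between the tables; they are omitted here for brevity.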
[Figure: the BusinessObjects Designer interface and the Query Panel, with panes for requested information, available information and conditions]
Future enhancements
The next major release of BusinessObjects is due in early 1999. The new
version will provide a number of enhancements, including:
• a toolkit for writing custom data providers
• a set of Active X controls for custom application development
• integration with a wider range of metadata sources, including the
Microsoft repository.
Commercial background
Company background
History and commercial
Business Objects was founded in 1990 by two former Oracle employees in
Paris, France. The company was modelled on the venture capital funded
technology start-ups in the US. It attracted investments from venture
capitalists in Silicon Valley and Europe, including the founding shareholders
of National Semiconductor and Oracle.
BusinessObjects was first released in 1991, and has since sold more than
870,000 licences worldwide. In 1994, 25% of the company was floated on the
Nasdaq. The flotation raised more than $30 million in capital. In 1996,
Business Objects decided to re-architect its product to support a 32-bit
component-based architecture. However, delays in the introduction of a
stable version 4.0 of the product caused a financial loss in the third quarter
of 1996. Business Objects has now recovered from this hiccup and, with a
major product transition now behind it, has returned to financial growth and
stability. Business Objects’ revenues for fiscal 1997 grew 34% to $114.3
million.
Business Objects has joint headquarters in Paris, France and San Jose,
California, US. The company employs 800 people and has additional offices
in North America, Europe and Asia-Pacific.
Customer support
Support
Business Objects offers a multi-tiered help-desk support system at a corpo-
rate and a field level. Support is provided via telephone hot-line and the
Web.
Training
Education services are available in several languages both in-house and on-
site. Public courses are run frequently for end users, designers, administra-
tors and supervisors. Computer-based training is also available.
Consultancy services
Business Objects’ consultants are mainly relational database specialists and
are available to provide advice and development support on all aspects of
product implementation. However, no significant portion of revenues is
attributed to consulting, except in the UK where it accounts for 18% of
revenue. Consulting projects include requirements for data access and the
analysis of the relational database schema. Business Objects also has nu-
merous consulting and referral partners.
Distribution
US
Business Objects
2870 Zanker Road
San Jose
CA 95134
USA
Tel: +(1) 408 953 6000
Fax: +(1) 408 973 1057
Europe
Business Objects
1, Square Chaptal
92300 Levallois-Perret
France
Tel: +(33) 1 41 25 21 21
Fax: +(33) 1 41 25 31 00
Asia-Pacific
Business Objects Australia
Suite 210
283 Alfred Street North
North Sydney
NSW 2060
Australia
Tel: +(61) 2 9922 3049
Fax: +(61) 2 9922 3069
http://www.businessobjects.com
E-mail: info@businessobjects.com
Product evaluation
End-user functionality
Summary
The process of building a business model is split between DBAs (who create
the mapping layer) and end users (who create reports by querying the data-
base using the mapping layer). Easy-to-use graphical tools are provided for
both types of user. The design tools provide extensive wizard support, and
DBAs can readily exploit existing database schemas. Multi-designer environ-
ments are also well supported by concurrency and versioning controls.
Basic design
Design interface
The BusinessObjects Designer module provides a graphical interface for
designing the mapping layer (the meta-model) and creating universes (end-
user perspectives on the meta-model). The interface has a standard
Microsoft Office 97 look-and-feel. A Quick Design Wizard is provided to
guide developers through each step in the process, and includes facilities for
design checking.
Models are built using queries, and viewed and analysed in a report. Users
simply drag and drop the required business objects in a familiar Microsoft
Office-style interface to retrieve data. The query functions are also
supported by wizards.
Visualising the data source
A graphical view of the database schema is provided. Sample data for a
particular table or column can also be viewed on-screen.
Universally available mapping layer
BusinessObjects supports a mapping layer, which it calls the ‘semantic
layer’. The universe defines a particular type of mapping for groups of end
users.
Prompts for metadata
Developers and end users are not automatically prompted to provide addi-
tional metadata when creating the mapping layer or building models. All
metadata inputs are optional.
Multiple designers
Multiple designers
The Designer tool supports a centralised metadata repository with check-
out/check-in facilities and concurrency controls; it provides locks on the
mapping layer so that only one designer can modify a universe at a time.
Support for versioning
The metadata repository uses delta versioning.
Statistical models
There are no statistical modelling facilities directly provided by
BusinessObjects; this is supported via integration with SPSS.
Trend analysis
BusinessObjects supports simple period-on-period analysis. Apart from this
there is no direct support for advanced trend analysis based on exponential
smoothing or curve-fitting techniques.
Simple regression
BusinessObjects does not support any regression forecasting functions.
Time series analysis forecasting
BusinessObjects does not support advanced time series forecasting
algorithms.
User-definable extensions
A variable and formula editor is available for creating simple user-defined
analytical functions. Scripting is also available.
Data mining
BusinessMiner is a data mining tool that can be fully integrated into
BusinessObjects; it uses the same semantic and security layer as
BusinessObjects and is accessed as an option on the BusinessObjects menu.
BusinessMiner is client-based and uses decision-tree technology developed
by Isoft to graphically depict relationships in data.
BusinessMiner is suitable for general business users; the emphasis of the
tool is clearly on ease-of-use, rather than on advanced data mining
algorithms.
Web support
Summary
Management
Summary
Management of models
Separate management interface
The management of reports and users is via the Supervisor interface. The
mapping layer is managed using the Designer module. Both tools share a
similar graphical interface, and most tasks are achieved using point-and-
click.
Security of models
Administrators can set multi-level security controls for reports. Read-edit
access can be specified on individual reports.
Query monitoring
The BusinessObjects client-server tools do not provide any graphical facili-
ties to monitor queries, although this can be set up using event-driven
scripts. WebIntelligence II, however, does provide an audit trail facility for
tracking queries.
Management of data
How persistent data is stored (not scored)
Data is stored in the document domain of the repository as a document file.
The file contains the report definition and one or more microcubes (models).
Data can also be stored locally on the client or a separate file server.
Scheduling of loads/updates
With Document Agent Server, users can schedule time-specific updates on
an hourly, daily, monthly or custom interval basis.
Event-driven scheduling
Event-driven scheduling in Document Agent Server can be achieved through
the use of scripts that check a specified environment variable before execut-
ing a schedule.
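The pattern can be sketched as follows. The flag name and function are hypothetical, not Document Agent Server’s actual scripting interface: a scheduled script checks an environment variable before executing the update, so an external event can enable or suppress the run.

```python
import os

def run_update_if_ready(execute, flag="DOCUMENT_READY"):
    """Execute the scheduled update only if the event flag is set.

    The scheduler would call this at each interval; `execute` is the
    update job, and `flag` is the environment variable set by the
    triggering event. Returns True if the update was run.
    """
    if os.environ.get(flag) == "1":
        execute()
        return True
    return False
```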
Failed loads/updates
Failed loads automatically produce an error message and log file. Scripts can
be set up to e-mail error messages to DBAs. Document Agent Server can
reprocess failed updates, and administrators can specify the number of re-
submission attempts.
Distribution of stored data
Data can be stored on the client, in the repository or on a file server.
Sparsity (only for persistent models)
Because only the lowest level of detail needed is stored in a microcube, there
is no requirement for sparsity handling in BusinessObjects.
Methods for managing size
No limits are imposed on the size of the target model. The size of the
microcube is restricted only by the time taken to download it to the client.
In memory caching options
Facilities are provided for tuning the cache.
Informing the user when stored data was last uploaded
All reports specify when the model was last refreshed.
Management of users
Multiple users of models with write facilities
BusinessObjects does not support a write-back capability.
User security profiles
An object-based security model is supported for users. Administrators can
grant access rights to individuals or groups of users (including nested
groups). Several user profiles are provided, and custom profiles can also be
set up. Complex security hierarchies can be set up and displayed graphically;
users ‘inherit’ security attributes and access rights from ascendant groups.
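The inheritance rule can be sketched as follows; this is a hypothetical model, not the BusinessObjects security engine. A user’s effective rights are the union of the rights granted to each of their groups and to every ancestor of those groups.

```python
def effective_rights(user_groups, group_parent, group_rights):
    """Union of rights from each group and all of its ancestor groups.

    user_groups:  list of group names the user belongs to
    group_parent: mapping of group -> parent group (None at the top)
    group_rights: mapping of group -> set of granted rights
    """
    rights = set()
    for group in user_groups:
        g = group
        while g is not None:
            rights |= group_rights.get(g, set())
            g = group_parent.get(g)
    return rights
```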
Query governance
Controls can be set to restrict the size of a query result (for example, the
maximum number of rows returned) and to limit the maximum ‘fetch’ time for
a query. Additional controls can be set on:
• elements of SQL syntax generated by queries (such as nested ‘select’
commands and operator functions)
• the use of certain business objects in a query
• access to specific rows in a database table.
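A minimal sketch of how row-limit and fetch-time governance might be enforced; all names here are invented, and this is not the BusinessObjects API.

```python
import time

class QueryGovernor:
    """Enforces a row limit and a fetch-time budget on query results."""

    def __init__(self, max_rows=10_000, max_fetch_seconds=60.0):
        self.max_rows = max_rows
        self.max_fetch_seconds = max_fetch_seconds

    def fetch(self, row_iterator):
        """Collect rows, truncating at the row limit and abandoning
        fetches that exceed the time budget."""
        start = time.monotonic()
        rows = []
        for row in row_iterator:
            if time.monotonic() - start > self.max_fetch_seconds:
                raise TimeoutError("query exceeded the maximum fetch time")
            rows.append(row)
            if len(rows) >= self.max_rows:
                break  # result truncated at the row limit
        return rows
```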
Restricting queries to specified times
Users and certain types of queries can be restricted to certain times of the
day.
Management of metadata
Controlling visibility of the ‘road map’
BusinessObjects’ ‘universe domains’ are used to restrict the visibility of the
metadata model for specific users or groups of users. These domains provide
users with a controlled view of the mapping layer and data operations they
can access.
Adaptability
Summary
Metadata
Synchronising model and model metadata
There are facilities to keep reports and the mapping layer synchronised.
Impact analysis
There is no support for impact analysis.
Metadata audit trail (technical and end users)
WebIntelligence II provides audit trail facilities for administrators, but
these facilities are not yet supported by the client-server tools.
Access to upstream metadata
BusinessObjects can access metadata created by data warehousing tools
such as Informatica, Prism Solutions and Carleton Apertus. This metadata
can be mapped to a universe schema.
Performance tunability
Summary
ROLAP
Multipass SQL
BusinessObjects automatically generates multipass SQL.
Options for SQL processing
SQL processing is done in the database server.
Speeding up end-user data access
Microcubes are cached on the server in an optimised format for queries.
These microcubes can be directly accessed to speed up end-user access times.
Aggregate navigator
BusinessObjects can use aggregate tables in the database. If DBAs aggre-
gate data in the target database at multiple levels (day, week and month),
BusinessObjects automatically selects the highest level of aggregation that
satisfies the query.
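The selection logic can be sketched as follows, assuming hypothetical aggregate tables at day, week and month grain: the navigator picks the coarsest aggregate that is still fine enough to answer the query.

```python
# Grains ordered from coarsest to finest; each maps to a hypothetical table.
AGGREGATES = [("month", "sales_by_month"),
              ("week", "sales_by_week"),
              ("day", "sales_by_day")]
GRAIN_ORDER = {"month": 0, "week": 1, "day": 2}

def choose_table(requested_grain):
    """Return the coarsest aggregate table that can answer a query at
    the requested grain (i.e. one at the same grain or finer)."""
    for grain, table in AGGREGATES:
        if GRAIN_ORDER[grain] >= GRAIN_ORDER[requested_grain]:
            return table
    raise ValueError("no aggregate satisfies the query")
```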
MOLAP
Trading off load time/size and performance
BusinessObjects does not provide its own persistent MDDB store.
Microcubes are created ‘on-the-fly’ as small multidimensional data struc-
tures and do not store pre-calculated aggregate data. However,
BusinessObjects can integrate with third-party MDDBs.
Processing
Use of native SQL to speed up data extraction
The BusinessObjects query engine supports native SQL and ODBC access to
databases.
Distribution of processing
Depending on the architecture implemented, processing can be done on the
client (desktop architecture) or the server (Document Agent Server or
WebIntelligence II).
SMP support
If the target database supports SMP, BusinessObjects clients running on
Windows 95 and Windows NT can take advantage of this. The
WebIntelligence II application server supports shared processing.
Customisation
Summary
Customisation
Option of using a restricted interface
Certain aspects of the BusinessObjects user interface can be modified to
restrict functionality.
Ease of producing EIS-style reports
A scripting language (called ReportScript) can be used to create custom EIS
reporting systems. Scripting is a two-part process: creating a visual inter-
face; and defining the actions to be taken from the interface.
Applications
Simple web applications
There are no tools provided to develop web applications.
Development environment
BusinessObjects does not support a graphical development environment. An
internal scripting language, called ReportScript, does allow developers to
design screen layouts and define program logic to launch custom
BusinessObjects reports and queries, as well as other desktop applications.
ReportScript is based on Visual Basic, and includes standard editor, compiler
and debugging facilities.
Use of third-party development tools
BusinessObjects supports OLE automation (client and server). This allows
BusinessObjects functions to be called from Windows development tools,
such as Microsoft Visual Basic and Visual C++.
Deployment
Platforms
Client
BusinessObjects clients run on Windows 3.1, Windows 95, Windows NT and
Unix Motif. WebIntelligence II supports any web browser that supports Java.
Server
Document Agent Server runs on Windows 95, Windows NT and Unix.
WebIntelligence II application server runs on Windows NT.
Data access
BusinessObjects has native drivers for all the major relational databases.
ODBC access is also provided. It also provides pre-packaged data providers
for a number of non-SQL sources, including spreadsheet data, multidimen-
sional databases (Microsoft SQL Server OLAP Services, Hyperion Solutions’
Essbase, Oracle Express and Informix MetaCube) and external applications;
rapid deployment templates are provided for SAP, Oracle, PeopleSoft and
Baan applications.
Standards
BusinessObjects provides a published API. It also supports Microsoft’s OLE
DB for OLAP (as a consumer), Oracle Express’s SNAPI and Hyperion Solu-
tions’ Essbase API.
Published benchmarks
BusinessObjects does not have any published benchmarks.
Price structure
The standard BusinessObjects query and reporting modules are priced at
$595 each. BusinessObjects Explorer and Analyzer modules cost $695 each.
The BusinessObjects Supervisor and Designer tools each cost $1,995.
WebIntelligence II is priced at $595 per user; the WebIntelligence II Ex-
plorer modules cost an additional $395 per user. Document Agent Server is
priced at $7,995 for the Windows NT version and $15,995 for the Unix
version. BusinessMiner is priced at $495 per user.
Summary
At a glance
Terminology of the vendor
Ovum’s verdict
Product overview
Future enhancements
Commercial background
Product evaluation
Deployment
Platforms
Data access
Standards
Published benchmarks
Price structure
At a glance
Developer
Cognos, Ottawa, Canada
Versions evaluated
PowerPlay and PowerPlay Web version 6
Key facts
• A desktop tool for multidimensional analysis
• Runs on Windows 95 and NT. Optionally, processing can be carried out on
a Unix server (AIX, HP-UX or Sun Solaris)
• Cognos also produces Aristotle, a client tool specifically designed for
access to Microsoft SQL Server 7 OLAP
Strengths
• An easy-to-use ‘out-of-the-box’ tool with no programming requirements
• Analytical and data mining functionality can be added via other Cognos
tools
• Good performance tuning for a desktop tool
Points to watch
• Accessing data from relational databases to build the model requires the
use of Impromptu, Cognos’s report writing tool
• Little support for specialised analytical analysis
• No web access to detailed data
Ratings
Web support
Management
Adaptability
Performance tunability
Customisation
Terminology of the vendor
Categories
The Cognos term for members of a dimension.
Impromptu
Impromptu is Cognos’s report and query tool. Although it is possible to
import data from RDBMSs without the use of Impromptu, in practice it is
usually used for this.
Impromptu is also used to display the detailed data.
Impromptu catalogue
The business view of the data that is produced in Impromptu Administrator.
This is available to developers via the Impromptu icon within Transformer.
Impromptu queries
PowerPlay can access relational data sources, but the definition of the data
to be retrieved is done in Impromptu and saved as a ‘query’. A query defini-
tion is thus the SQL definition and metadata to enable Transformer to
execute the query.
Model
The model is the specification from which PowerCubes are generated. The
model is usually designed by IT staff.
The end user works with PowerCubes (data stored in a multidimensional
format) rather than the model (a design specification).
Portfolio
This organises PowerPlay reports into briefing books for EIS use.
PowerCube
Cognos’s storage format for multidimensional data. PowerCubes can be
stored as files or in a relational database if extra management and security
features are required.
Ovum’s verdict
What we think
PowerPlay used to be described as ‘an easy-to-use desktop tool’, but this is
now only one of its several aspects. When used in this mode, it is differenti-
ated from its competitors primarily by its integration with other modules
from Cognos – thereby potentially offering an impressive range of data
mining and forecasting features.
While it is most frequently used in a client-centric configuration, it also
provides a mid-tier engine.
PowerPlay now offers a compromise between an end-user tool and an MDDB.
It offers better datastore facilities than most desktop OLAP products, and
better end-user facilities than most OLAP servers. It is an ideal tool if you do
not know how large or specialised your requirements will become, because it
allows you to introduce a specialised OLAP server if one is needed.
The increasing functionality provided by PowerCubes – particularly the
partitioning options – offers some of the performance benefits of an MDDB.
Cognos will have to decide whether to continue to enhance the datastore
features and compete directly with the OLAP servers offered by Arbor,
Oracle and Microsoft, or whether to concentrate on strengthening the end-
user functionality and assume that the tool will be used to access a third-
party OLAP database.
When to use
Cognos’s PowerPlay should be on your shortlist if you:
• want something that runs out-of-the-box
• want to separate the roles of model builder and model user
• already use Impromptu for reports
• want integration with desktop data mining and forecasting tools
• want to access ERP (particularly SAP) data.
It is less suitable if you wish:
• to build models with more than 500,000 unique members
• to develop customised applications
• to avoid buying two tools, because Impromptu is essential for efficiently
building models based on relational data
• to use complex analytics.
Product overview
Components
The main components from Cognos that support OLAP are:
• PowerPlay version 6.0
• Impromptu version 5.0
• PowerPlay Server Web Edition version 6.0
• Scenario version 2.0
• 4Thought version 4.0.
The main focus in this evaluation is on PowerPlay and PowerPlay Server
Web Edition.
Figure 1 shows whether the component runs on a client or a mid-tier server,
the stage at which it is typically used and its primary function.
PowerPlay
PowerPlay provides support for the OLAP aspect of the business intelligence
spectrum. It is used to build multidimensional models, and can work with
both these and third-party OLAP models built using Arbor’s Essbase, Oracle
Express, DB2 OLAP and OLE DB for OLAP providers.
This component is well integrated with Impromptu and, in practice, even if
the main requirement is for OLAP, both tools would be used. There are two
reasons for this: when drilling down to the detail, Impromptu is required
and, if data in relational databases is required in the PowerCube, it is im-
ported via an Impromptu query.
Other components that are installed as part of PowerPlay, but have separate
icons on the desktop are listed below.
Transformer
The Transformer module is used to build multidimensional models from
Impromptu queries (this is the means of accessing relational databases, as
described above), ASCII files, Excel, Lotus, dBase, Paradox and FoxPro.
Impromptu
Impromptu is primarily a report writing tool, but reports specified in it can
also be used as data sources in PowerPlay. This indirect method is used to
incorporate data from relational databases and ERP sources such as SAP,
Baan, PeopleSoft and Oracle Financials into multidimensional models
defined in PowerPlay.
Impromptu comes in two editions:
• the Administration edition for creating a catalogue – giving a business
view of the data sources (that is, metadata)
• the Enterprise User edition for creating reports.
Architectural options
PowerPlay is usually described as a desktop OLAP tool, but it can also be
configured so that the processing to build and manipulate the multidimen-
sional model is carried out on a mid-tier Unix server (AIX, HP-UX or Sun
Solaris).
Light mid-tier
PowerPlay supports two forms of light mid-tier architecture (in which the
processing is done on a mid-tier server, but there is no MDDB):
• using a Unix server for the generation and scheduling of PowerCubes
• using the PowerPlay Server Web Edition.
When a Unix mid-tier server is used, the design of the model is done on the
desktop using the Transformer component of PowerPlay and then passed to
the mid-tier server for building. The model can then be accessed in the usual
way using PowerPlay on the client.
PowerPlay Server Web Edition is used to access, but not build, models. As
with most OLAP tools, it uses a four-tier architecture in which the
PowerPlay Web Server uses CGI to communicate with the web server and
thus enables the generation of HTML pages.
Desktop architecture
This is the ‘natural’ architectural configuration for PowerPlay. The multidi-
mensional model is designed and built on the desktop PC. The PowerCube
can be stored locally or centrally and can, optionally, be stored in an RDBMS
if database management features are required.
Mobile architecture
PowerPlay supports a mobile architecture, in which OLAP processing can
continue when the links to external data and processing sources are severed.
It is simply achieved by copying the PowerCube onto a PC with PowerPlay
installed.
Using PowerPlay
Need for Impromptu to access data to build the models
One feature that is unusual in PowerPlay is the way in which some data for
the model is accessed. To use data from an RDBMS or a specialised data
source (such as SAP or Oracle Financials) in a model, a ‘query definition’ has
to be defined in Impromptu to extract the data. This query definition then
appears as a data source in the PowerPlay interface. Thus, Impromptu must
be installed and used when the multidimensional model is built.
The data is then stored within the PowerCube, so it is not necessary to have
Impromptu when the model is being accessed.
Out-of-the-box ease-of-use
PowerPlay is an easy-to-use, out-of-the-box tool. Version 6 makes use of
multiple frames, as shown in Figure 4. This shows how the navigation frame
on the left enables the user to get an overall picture of the model.
Partitioning options
In early versions of PowerPlay only the detailed data was held in the
PowerCube and all aggregates were calculated on-the-fly. Versions 5 and 6
offer partitioning (manual or automatic). Partitioning is a process in which a
potentially large model is divided into a number of partitions, or nested sub-
cubes. The partitions contain pre-calculated aggregates on some dimensions.
The effect of partitioning is to increase the size of the cube but potentially to
improve performance. Load time is traded off against end-user access time
by the design decisions made when setting up partitions.
PowerCubes can be designed so that the user can navigate easily from one to
another if they share dimensions.
Future enhancements
Version 6.5 of PowerPlay is scheduled for release in the first half of 1999.
Cognos intends to include the following enhancements:
• the provision of a multi-server back end. This is expected to support load
balancing, write-back from 4Thought to PowerCubes and greater support
for PowerPlay Web Reports
• support for the remote installation of the client version of PowerPlay
• support for WAN deployment of the client version of PowerPlay
• the addition of extra features to the PowerPlay Server Web Edition to
support Java clients, drill through to Impromptu Web Reports and a
common log-in for all web products.
Commercial background
Company background
History and commercial
Cognos was established in 1969 and is based in Ottawa, Canada. It was
originally a consulting company and developed into a single product com-
pany selling PowerHouse, its 4GL for mid-range systems. In the late 1980s,
it broadened its portfolio from straight application development tools into
data analysis and reporting, launching PowerPlay in 1990 and Impromptu a
year later.
In the early 1990s, the company lost momentum when the switch towards
client-server systems reduced demand for PowerHouse and the emphasis
within the company moved from its 4GL product to desktop business intelli-
gence tools.
It has extended the range of its business intelligence desktop tools through
acquisitions. In 1995, it licensed data mining software from Angoss Software
in Toronto, which emerged as Cognos’s data mining product, Scenario, in
1997. Also in 1997, the company acquired 4Thought, a forecasting tool using
neural networks, from Right Information Systems, a UK-based company, for
$8 million. Another 1997 acquisition, Interweave Software, provided the
basis for the web versions of Impromptu. In early 1998, Cognos licensed an
end-user tool, code named Aristotle, from Panorama, the Israeli company
that originally developed Microsoft SQL Server 7 OLAP Services. Microsoft
claims that this does not give Cognos any advantage over other ISVs be-
cause the Aristotle team at Panorama post-dated the OLAP Services devel-
opments.
Cognos has reported good revenue growth and profitability for several years.
Revenue for the financial year ending February 1998 was $244 million, with
a profit of $33 million (or $50 million excluding the cost of acquisitions). In
the previous year revenue was $198 million and profit was $36 million.
Customer support
Support
Hotline support is available, typically at a cost of 20–25% of the licence fee.
Training
Training is offered at Cognos sites worldwide as well as on-site.
Consultancy services
Cognos offers consultancy directly and via its partners. The company runs
the Cognos Certified Professional Programme for product specialists.
Distribution
Headquarters
Cognos
3755 Riverside Drive
PO Box 9707, Station T
Ottawa, ON
Canada K1G 4K9
Tel: +1 613 738 1440
Fax: +1 613 738 0002
US
One Burlington Business Centre
67 South Bedford St
Suite 200W
Burlington, MA
USA
Tel: +1 781 229 6600
Fax: +1 781 229 9844
Europe
Cognos
Westerly Point
Market Street
Bracknell
Berkshire
RG12 1QB
UK
Tel: +44 1344 486668
Fax: +44 1344 485124
Asia-Pacific
Cognos
110 Pacific Highway
Third Floor
St Leonards
NSW 2065
Australia
Tel: +61 2 9437 6655
Fax: +61 2 9438 1641
http://www.cognos.com
Product evaluation
End-user functionality
Summary
The strength of the tool lies in its ease-of-use in defining the structure of a
multidimensional model and populating it with data. There is automatic
support for defining the time dimension and quickly producing a prototype if
the data source is appropriately structured. Using Impromptu to access SQL
data prevents a sample of data being available to the designer.
Basic design
Design interface
The design interface is easy-to-use and exclusively point-and-click.
Visualising the data source
It is not possible to display a sample of data from the data source/s to be
used in the model.
Universally available mapping layer
There is no direct support for a universally available mapping layer. Some
support is provided by the use of re-usable Impromptu queries.
Prompts for metadata
The developer is not automatically prompted but can, optionally, provide a
description for dimensions, measures and members in the model. This is
accessible to end users.
Access to upstream metadata
There is no integration to access metadata generated by extraction or data-
base tools in the preparation of the data.
Multiple designers
Multiple designers
Support for multiple designers is only an issue if a mid-tier server is used, in
which case only one client is allowed to access models on the server. This
ensures that there are no lost updates.
Support for versioning
There is no direct support for versioning.
User-definable extensions
There are facilities for users to extend the analytical capabilities by writing
their own functions using CognosScript, a scripting language similar to
Visual Basic. PowerPlay provides a development environment for this, which
includes a debugger. The language exposes the dimensions, measures and
members, either by name or using an index.
Data mining
This is not provided by PowerPlay, but is the main focus of Scenario.
Management
Summary
There are two main editions of PowerPlay (client- and server-based). In both
editions, user and PowerCube securities can be defined. The management
features of the server edition are designed to support a large number of users.
The score here reflects the good support given to the management of users. In
version 6, the features for performance tunability of the data have been
substantially enhanced, but to fully exploit these the administrator has to
manually control the partitioning features.
Management of models
Separate management interface
The administrator uses the Transformer interface of PowerPlay running on
the client to manage the design and build process.
Security of models
Security is controlled through the Authenticator document, which is usually
stored on the mid-tier server. The location of this is specified within each
PowerCube. It enables the administrator to define privileges for users and
PowerCubes.
Query monitoring
Query monitoring is not supported in the client version. However, PowerPlay
Web Administrator provides status and performance information that can be
used to balance web processing over multiple servers. Information given
includes the number of requests received in the last minute and the average
time to process requests.
Management of data
How persistent data is stored (not scored)
Persistent data is stored in PowerCubes. PowerCubes can be stored locally,
on a mid-tier server or in an RDBMS (for additional security and manage-
ment features).
In early versions of PowerPlay, only the detailed data was held in the
PowerCube, and all aggregates were calculated on-the-fly. Versions 5 and 6
offer partitioning (manual or automatic), which adds partitions to the cube
that contain pre-calculated aggregates on some dimensions. The effect of
partitioning is to increase the size of the cube but improve performance. The
size of cubes is reduced by compression.
PowerCubes can be designed so that the user can navigate easily from one to
another if they share dimensions.
Scheduling of loads/updates
Scheduling can be controlled either from the desktop or from the mid-tier
server. If this is done from the desktop, a separate Scheduler module is
available to schedule refreshing of the PowerCube. With this module, a
schedule can be defined using point-and-click. If scheduling is centralised on
the server, it uses the Unix scheduling utilities of cron and crontab.
Using the scripting language, multiple cubes can share a refresh schedule.
Event-driven scheduling
Event-driven scheduling can be defined using the scripting language from
within the scheduler in the client version.
Failed loads/updates
When a PowerCube is built, information is created in a log file, including the
time taken, number of records processed and whether the processing was
successful or not. If a load is partially successful, a checkpoint is created so
the load job can start from this point rather than the beginning. If a job fails,
the user is informed next time Transformer is opened.
Distribution of stored data
PowerCubes can be stored wherever the user wishes.
Sparsity (only for persistent models)
Transformer, by default, assumes that the combinations of dimensions in a
model are sparse. Sparsity only becomes an issue when pre-calculated
consolidated aggregates are also stored, which happens only when
partitioning is used (the basic PowerCube does not include
consolidations). The handling of sparsity is therefore dependent upon the
design of the partitions.
Methods for managing size
A basic PowerCube contains only detailed data and no aggregates. In
PowerPlay, size is a function of the amount and design of partitioning.
Without partitioning, records are stored in the PowerCube and rolled up at
runtime by the PowerPlay client to provide summary values. Potentially,
millions of records could be involved in this. Partitioning reduces the
summarisation required at runtime by writing consolidated records to a
partition, and leaving lower level detail records in a separate partition.
In general, the effect of partitioning is to trade off end-user access time
against increased build time. Cognos claims that partitioning may increase
size by 50–100%, but speeds up retrieval time between ten and 100 times.
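The trade-off can be illustrated with a small sketch (illustrative Python, not Cognos code; the data and function names are invented): without a consolidated partition, every summary query rolls the detail records up at runtime, whereas a partition written once at build time answers the same query from a handful of pre-calculated records.

```python
from collections import defaultdict

# Detail records: (product, region, value) -- the only data a basic
# PowerCube-style store would hold.
detail = [("p1", "north", 10), ("p1", "south", 5),
          ("p2", "north", 7), ("p2", "south", 3)]

def rollup_at_runtime(records, key_index):
    """Summarise detail records on the fly, as the client would."""
    totals = defaultdict(int)
    for rec in records:
        totals[rec[key_index]] += rec[2]
    return dict(totals)

# 'Partitioning': consolidated records are written once at build time,
# so a query for regional totals reads 2 records instead of all detail.
region_partition = rollup_at_runtime(detail, 1)
print(region_partition)   # {'north': 17, 'south': 8}
```

At runtime the client then reads `region_partition` directly instead of summarising every detail record, which is the source of the retrieval speed-up described above.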
In-memory caching options
The cache size used in the client when building and executing models can be
set through the ‘Cognos.ini’ file. It is a manual process, requiring adjustment
if there appears to be heavy disk activity or the building of PowerCubes
takes longer than expected.
Informing the user when stored data was last uploaded
As described in the End-user criteria, reports can (optionally) include the
refresh date in their title. PowerCubes viewed in Explorer do inform the
user when the stored data was last uploaded.
Management of users
Multiple users of models with write facilities
Not applicable.
User security profiles
There are a number of ways of controlling security, including password
access to PowerCubes, definition of ‘user classes’ (that is, groups), security
profiles, and the use of the RDBMS (if the PowerCubes are saved in a data-
base) and network security. The file containing the user class definition
information is encrypted.
Query governance
There are no query governance features, apart from the time restriction.
Restricting queries to specified times
Users can be restricted to particular times for accessing PowerCubes.
Management of metadata
Controlling visibility of the ‘road map’
Users are not aware of PowerCubes to which they have no access rights.
Different users can have different views on the same PowerCube.
Adaptability
Summary
Metadata
Synchronising model and model metadata
There is no mechanism to ensure that the descriptive metadata is
synchronised when changes are made to the dimensions and measures.
Impact analysis
There is no support for impact analysis.
Metadata audit trail (technical and end users)
There is little metadata to audit.
Access to upstream metadata
There is no support for access to upstream metadata.
Performance tunability
Summary
The original design of the PowerCube gave little scope for performance
tunability, but this has changed in recent versions. The combination of break-
ing the model into separate cubes that are linked so users can drill through
from one to another, and partitioning, give significant tuning options. Parti-
tioning gives the administrator the option of trading off build time against
execution time. The incremental update enables new data to be appended to
the PowerCube rather than recreating the whole model.
In the commonly used client-centric configuration, the PowerCube is loaded
down to the desktop; the speed of the local processing is a function of the
desktop hardware and the settings for the in-memory cache.
ROLAP
Not applicable, because the data for the model is retrieved from a pre-built
multidimensional PowerCube.
MOLAP
Trading off load time/size and performance
This is achieved using partitioning. Without partitioning, records are stored
in the PowerCube and rolled up at runtime by the PowerPlay client to
provide summary values. Potentially, millions of records might be involved
in this, leading to reduced end-user performance. Partitioning reduces the
summarisation required at runtime by writing consolidated records to a
partition, and leaving lower level detail records in a separate partition.
In general, the effect of partitioning is to trade off end-user access time
against increased build time. Partitioning may increase the size by 50–
100%, but speeds up retrieval time between ten and 100 times.
There is no wizard support to help in deciding what to consolidate; this is a
manual design decision.
Processing
Use of native SQL to speed up data extraction
Data for the PowerCubes is retrieved using Impromptu query definitions.
Impromptu uses native SQL for all the major databases including Oracle,
Informix, Microsoft SQL Server, Sybase, CA-Ingres and MDI DB2 gateway.
Distribution of processing
There is no support for the distribution of processing.
SMP support
The application uses multi-threading and can thus take advantage of SMP.
Customisation
Summary
Customisation
Option of using a restricted interface
There is no option for the user to access the model via a restricted interface.
Ease of producing EIS-style reports
This is provided by the portfolio, a module provided with both Impromptu
and PowerPlay, which provides an EIS environment to give the simplicity of
a ‘button-driven’ application.
Applications
Simple web applications
There is no direct support to develop applications specifically for the Web.
The web browser is used to access PowerCubes developed via the desktop or
server.
Development environment
There is no OLAP-specific development environment.
Use of third party development tools
PowerPlay offers OLE automation and thus reports, briefing books and other
PowerPlay objects can be embedded in applications written using an OLE-
compliant language.
Deployment
Platforms
If used in a client-server configuration with PowerCube generation and
scheduling carried out on the server, the available server platforms are HP-
UX 9.04 and 10.x, AIX version 4.1, and Sun Solaris 2.4 (SunOS 5.4).
Data access
The sources that can be used to build a model include Impromptu files
(which gives access to RDBMS and ERP data), comma delimited files, and
personal data sources such as dBase, Excel and Foxpro. PowerPlay can
directly access Microsoft Access databases.
Impromptu uses native SQL for all the major databases, including Oracle,
Informix, Microsoft SQL Server, Sybase, CA-Ingres and the MDI DB2 gate-
way.
Standards
PowerPlay has an OLE DB for OLAP consumer interface.
Price structure
If purchased separately, Impromptu (End User Edition), PowerPlay Client
and Scenario cost $700 per user. When purchased as a bundle, the cost is
$1,300 per user.
Impromptu Web Query and PowerPlay Server Web Edition cost $500 per
user (when purchased for 100 users) and $255 per user (when purchased for
1,000 users). Internet access licensing is also available. Contact Cognos for
details.
PowerPlay Administrator costs $2,000 (only one is required).
Impromptu Administrators Edition costs $900 per user.
Published benchmarks
There are no published benchmarks for PowerPlay.
Gentia Millennium
Applications Platform
Summary
At a glance
Terminology of the vendor
Ovum’s verdict
Product overview
Future enhancements
Commercial background
Product evaluation
Deployment
Platforms
Data access
Standards
Published benchmarks
Price structure
Evaluation: Gentia Software – Gentia Millennium Applications Platform Ovum Evaluates: OLAP
At a glance
Developer
Gentia Software, London (UK)
Versions evaluated
Gentia Millennium Applications Platform (G-MAP), version 5.0.2
Key facts
• A development environment for analytical OLAP applications with an MDDB
• Server runs on Windows NT and leading Unix flavours; clients run on
Windows 95, 98, NT, Macintosh (PPC) and Sun Solaris. Web support is
also provided
• Gentia’s main analytical application is the Renaissance Balanced
Scorecard for implementing and tracking organisational strategies
Strengths
• Powerful development environment with specialist OLAP objects and
functions
• Supports a wide range of distributed architectures
• Gentia also provides packaged analytical applications for enterprise
performance management
Points to watch
• Limited ‘out-of-the-box’ OLAP functionality
• Developers may find learning and using the core visual development
environment difficult initially
• The Gentia MDDB cannot be accessed by third-party front-end tools
Ratings
Web support
Management
Adaptability
Performance tunability
Customisation
Base models
These are physical multidimensional cubes that contain dimensions and
measures.
Book
A Gentia application is a number of pages organised into chapters and placed
in a book. A page is effectively a screen of information that is likely to contain
a set of visual objects for screen building, database access and analytical
functionality, and a multidimensional business model, which can be
interactively explored.
GDL
Gentia Development Language. A scripting language with procedural
control, used to define management tasks and to extend the flexibility of the
application development environment.
Item
The term used for measures.
Multicube architecture
GentiaDB uses a multicube architecture to minimise the size explosion
caused by combining sparse dimensions. In effect, the logical
multidimensional business model is built using a number of separate cubes,
each of which has dimensions that are dense with regard to each other.
Scenario
A virtual multidimensional cube that can be created from SQL tables and
other data stores. A scenario can be stored locally for personal offline use or
on a server for shared workgroup access.
Smart Agent
Manages the automation and delegation of common and complex processing
tasks within a Gentia application.
Warehouse
An organisational element used to group application details together and set
security restrictions on them. Each warehouse can contain one or more
books. Also known as an object store.
Ovum’s verdict
What we think
Gentia’s core competency is providing a complete development platform for
building and deploying analytical applications in a heterogeneous and
distributed environment. The Gentia Visual Development Environment
(VDE) is well equipped to cope with a range of application development needs
for medium-to-large sized organisations. The new Application Framework
extends application development to end users, but complex development will
require significant programming skills and IT involvement.
Gentia G-MAP is, however, a less out-of-the-box OLAP application than those
offered by other MDDB vendors – principally because of the lack of front-end
tools to access the database. While, in theory, the GentiaDB
multidimensional database could be used directly by end users, the original
decision of the company to base the API on the OLAP Council’s first
specification means that the MDDB cannot be accessed by third-party OLAP
tools. Gentia is well aware of this limitation and has already adopted the
OLE DB for OLAP standard as a consumer, and provider support is planned
for version 6.0.
The core application development product is sound, but users that require
greater sophistication and complexity in their applications may find learning
and using the VDE difficult initially. A sophisticated development
environment such as Gentia’s will require significant training. We are also
concerned about the long-term viability of a small company offering a product
that needs a significant amount of customisation, when the entry of Microsoft
will hasten the move towards a commodity market. To the company’s credit,
it recognises the threat and has repositioned itself as a solutions-oriented
company, with the delivery of applications such as the Renaissance Balanced
Scorecard and the Impact range, designed to implement organisational
strategies and track performance.
When to use
The Gentia suite of products should be considered if you:
• have a distributed and heterogeneous IT environment
• require strategic enterprise management applications, such as balanced
scorecard, performance measurement and activity-based costing
• want to build highly specialised OLAP applications and have the
necessary development skills in-house to create them.
It is unsuitable if you:
• simply require an out-of-the-box OLAP business intelligence application
with minimal customisation
• want to use a range of third-party OLAP clients, as well as front-end tools
developed in-house
• intend to build large models with more than one million unique members
and flexible time dimensionality built-in.
Product overview
Components
The main components of the Gentia Millennium Applications Platform
(G-MAP), version 5.0.2 are:
• Visual Development Environment (VDE)
• Application Framework
• Open Network Architecture
• GentiaDB
• Gentia WebSuite
• Gentia Excel Add-in.
Figure 1 shows whether the component runs on a client or a mid-tier server,
the stage at which it is typically used and its primary function.
Application Designer
Application Designer is a ‘personalised’ extension to the core Gentia VDE,
which allows business end users to build their own OLAP applications. It
provides a template-driven approach for building applications and
multidimensional models and designing reports using Excel. Users have
access to a number of specialised business templates or can choose to build
them from scratch.
The templates contain dimensions with business rules and users can select
and use predefined templates via Application Designer’s drag-and-drop
environment. Model structures can be imported from simple external file
structures and updates scheduled.
Application Framework
The Application Framework is a layer on top of the core Gentia VDE and
Application Designer components, which provides a further set of re-usable
components to speed up the development of applications. The components
include templates, pages, menus, toolbars, status information,
administration, navigation, online help and user preferences. The
Application Framework also includes an Object Library that contains
illustrative samples of code.
A Gentia User View facility is also provided, to allow all the components to be
customised.
GentiaDB
The GentiaDB component of the Gentia Platform for Analytical Applications
is a multidimensional database; it uses a multicube approach to cope with
the sparsity issue. Each cube is made up of dimensions that are densely
related to each other. If there is a sparse relationship (for example, sale price
is not related to customer or location), then it is put in a separate cube. Thus,
the sparse, large model is a view and joins are performed on-the-fly to
produce slices of the view as required.
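The multicube approach can be sketched as follows (illustrative Python, not Gentia code; the cubes, members and function names are invented): two dense cubes share the product dimension, the sparse logical model is never materialised, and slices of the view are produced by joining the cubes on-the-fly.

```python
# Two small dense cubes sharing the 'product' dimension; the large,
# sparse logical model (product x customer x price) is never stored.
sales = {("p1", "c1"): 100, ("p1", "c2"): 40, ("p2", "c1"): 60}  # product x customer
price = {"p1": 10, "p2": 5}                                       # product only

def slice_revenue(product):
    """Join the two dense cubes on-the-fly for one slice of the view."""
    return {cust: units * price[prod]
            for (prod, cust), units in sales.items() if prod == product}

print(slice_revenue("p1"))   # {'c1': 1000, 'c2': 400}
```

Because `price` does not vary by customer, storing it in the sales cube would replicate it across every customer member; keeping it in its own dense cube avoids that size explosion.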
To minimise the time taken to load fresh data into the database, the process
of consolidation is carried out with minimal re-calculation. When a new
value is added, rather than re-calculating all its derivatives, the system
uses the metadata to determine which values will change as a result and
re-calculates only this minimal set.
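The idea of recalculating only the affected derivatives can be sketched as follows (illustrative Python, not Gentia code; the hierarchy and names are invented): the metadata is a child-to-parent map, and a change to one leaf invalidates only its chain of ancestors.

```python
# Hierarchy metadata: child -> parent. A change to one leaf invalidates
# only its ancestors, not the whole cube.
parent = {"london": "uk", "uk": "europe", "paris": "france", "france": "europe"}
values = {"london": 5, "paris": 3}

def consolidate_all(leaf_values, parent):
    """Full consolidation: roll every leaf up through its ancestors."""
    totals = dict(leaf_values)
    for child in leaf_values:
        node = parent.get(child)
        while node:
            totals[node] = totals.get(node, 0) + leaf_values[child]
            node = parent.get(node)
    return totals

def affected(member, parent):
    """The minimal set of derived values a change to 'member' invalidates."""
    chain, node = [], parent.get(member)
    while node:
        chain.append(node)
        node = parent.get(node)
    return chain

print(affected("london", parent))   # ['uk', 'europe']
```

So when `london` changes, only `uk` and `europe` are re-consolidated; `france` is untouched.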
The GentiaDB also provides a central metadata store for dimension and
measure definitions. Base models (the equivalent of physical OLAP cubes)
are created using these definitions. In addition, virtual cubes can be created
dynamically by joining base models (called join models) or can be created
from SQL tables and other data stores (called scenarios).
Gentia WebSuite
This is a web extension to the Gentia environment that provides full
application functionality over the Internet. Using the Gentia WebSuite,
messages from the browser are relayed to an Internet server (for example,
Microsoft’s IIS), which then passes them to the WebSuite, which in turn acts
as an interface to the Gentia application server and interprets the message.
WebSuite offers three interfaces:
• View, for carrying out ad hoc queries and OLAP-type operations, such as
drill-down, which are submitted to the web server to generate new
pages
• Report, which provides report formatting and distribution facilities
• Data Entry, for writing back to the database for budgeting applications.
Architectural options
The Gentia toolset is unique among the tools we have evaluated in that it can
support all four architectural options. The Object Request Broker allows for
location transparency and, when designing the application, the user allocates
processing to the server or the client.
Desktop architecture
As described for the light mid-tier architecture, there is flexibility about
where the processing takes place. By configuring the system so that
processing is carried out on the PC, a desktop architecture is supported,
although restrictions will apply on the size of the dataset that can be
analysed.
Mobile architecture
The small footprint of the client (approximately 8Mb) means that it is
possible to run an application on a laptop computer. Mobile users can
download predefined or ad hoc sections of data, by selecting pages from
books, and then analyse them offline. Changes made to the data are
automatically published to other workgroup members upon reconnection to
the network server.
Using G-MAP
attributes is inherited from a base builder object, but these may be modified
through the Object Inspector.
As in a Visual Basic development environment, the graphical interface is
designed and then code is attached to it.
Objects are linked to data using the Connections Mapper, which defines the
locations of data sources and associates the objects with a business model in
the GentiaDB.
The base-level and Application Framework objects provided will satisfy
most application needs. However, Gentia also provides the Gentia
Development Language (GDL) to provide greater sophistication in
applications. GDL supports additional features, such as event handling,
dynamic SQL and an interface to the Object Store.
Future enhancements
Version 6.0.1 of G-MAP is scheduled for general release in the fourth quarter
of 1999. The major planned enhancements include:
• OLE DB for OLAP provider support − any OLAP end-user tools complying
with this standard will be able to access the GentiaDB. Gentia is
partnering Simba Technologies to embed the SimbaProvider OLE DB for
OLAP products within Gentia
• extended thin-client options − users will be able to access data via any
Microsoft and Java web browser and a range of Citrix-based devices
• new data administration components − for controlling and monitoring
back-end data loading and processing tasks in the Gentia environment
• consumer and provider support for OLE and ActiveX objects − allowing
developers to embed external objects (for example, mapping and ERP
transaction objects) within the Application Framework and to embed
Gentia objects into third-party applications
• a page-build API − allowing end users to customise application pages on-
the-fly. Links will also be provided to the Application Designer for access
to a range of predefined templates
• support for VBScript and JavaScript − this will be added to complement
the GDL scripting environment
• enhanced capabilities for the GentiaDB − including advanced calculations,
alternative and multiple time dimensions, selective disablement of
consolidation of hierarchies and loading of data at different levels
• a revised SQL architecture − Gentia will use Merant’s DataDirect and
SequeLink drivers and manager to provide enhanced access to relational
databases, using ODBC, OLE DB, JDBC and native SQL drivers. The
drivers will be extended to include non-SQL sources such as Lotus Notes
databases.
Gentia is currently evaluating plans to optionally embed the K.wiz
datamining components (acquired from Compression Sciences) into G-MAP
and existing analytical applications. K.wiz will also provide the focus for the
development of new applications for the e-commerce sector, specifically
basket analysis and fraud detection.
Gentia Software is also expanding its suite of packaged analytical
applications. Two additional applications will be delivered in the second half
of 1999:
• Budget Impact, for budgeting and forecasting analysis. Gentia is currently
seeking a domain partner for development
• Traffic Optimizer, a network traffic analysis application for telcos, which
is being developed in partnership with Bell Atlantic and Hewlett-Packard.
Additional reseller partnerships for the RBSC application are also expected.
Commercial background
Company background
Customer support
Support
Global 24×7 telephone hotline support is available; this is based in Atlanta,
Georgia, US, with second-line support provided from Ipswich, UK. Support is
also available via the Web and online product enhancement and fault
reporting facilities are provided.
Support and maintenance is priced at 20% of the annual licence fee.
Training
Gentia runs a variety of public or on-site courses to support customers,
including a four-day introductory course, an advanced four-day course and
shorter courses on specialised topics such as Gentia Agents.
Consultancy services
Almost all purchasers of Gentia products buy consultancy of some kind. Most
of it is provided by accredited global consultancies (Arthur Andersen, CAP
Gemini and PricewaterhouseCoopers) and Technical and Computer
Management Services (TCMS), which Gentia acquired in May 1998.
Services accounted for 45% of Gentia’s revenues in 1998.
Distribution
Gentia has headquarters located in London (UK) and Boston (US).
Europe
Gentia Software
Tuition House
St George’s Road
Wimbledon
London, SW19 4EU
UK
Tel: +44 181 971 4000
Fax: +44 181 944 1604
US
Gentia Software
201 Edgewater Drive, Suite 241
Boston, MA 01880
USA
Tel: +1 781 224 0750
Fax: +1 781 224 4340
Asia-Pacific
Gentia Software Singapore
89 Science Park Drive
#04-06/07 The Rutherford
Singapore Science Park
Singapore 11826
Tel: +65 778 1678
Fax: +65 778 6884
E-mail: info@gentia.com
http://www.gentia.com
Product evaluation
End-user functionality
Summary
Gentia is primarily an application development environment, so the end-user
functionality largely depends on what the developer builds into the
application. However, the Application Designer and Excel Add-in tools
provide a greater degree of out-of-the-box functionality for end users.
Regardless of the tool used, all the Gentia applications provide the usual
OLAP functions of drill-down and pivot. Distribution is supported by
publishing reports using a ‘book, chapter, page’ metaphor for users and
workgroups. However, the product would be enhanced by a wider range of
front-end user tools and better support for subscribing to reports.
Summary
In the Gentia toolkit, the multidimensional business model is built in
GentiaDB, but, because of the lack of end-user tools to directly access it, it will
almost always be embedded in a Gentia application. The multidimensional
business model in the MDDB can be accessed through its API, but the
company claims this is not often requested or required. Although much of the
specification is done using point-and-click, it is less easy to use than other
MDDBs we have seen. The Application Designer provides a simpler interface
for end users to define and populate models, although they are typically less
complex. Areas that could be strengthened include support for more
specialised calculated measures and flexible time dimensions, and the ability
to capture and use metadata during the design process.
Basic design
Design interface
The design interface for dimension, member and measure definitions is
largely via point-and-click.
Visualising the data source
The developer can see both the schema and a sample of data.
Universally available mapping layer
There is some support for this within GentiaDB, through the use of shared
metadata.
Prompts for metadata
There are no prompts to add metadata.
Time dimension
There is support for defining time periods and spans, which can be defined
dynamically (for example, year-to-date). There is no support for alternative
time dimensions and limited support for defining multiple time dimensions
in a single model.
Annotating the dimensions
The use of ‘display sets’ allows designers to add descriptive comments to
dimensions.
Default level of a dimension hierarchy
This cannot be specified within GentiaDB; it has to be specified using
dynamically populated filters for each user or group.
Multiple designers
Once the model has been designed, it is ‘committed’, meaning that the data is
loaded. GentiaDB supports ‘all or nothing’ consolidation to ensure that
updates are not lost.
Support for versioning
GentiaDB provides version control.
Summary
Analytical functions can be built into the multidimensional business model
when it is defined in the GentiaDB (or third-party OLAP source), and others
added when the model is used within an application created in the Gentia
VDE. A few analytical functions are provided within the MDDB and the
application development environment. The company’s philosophy behind the
product is that if complex analytics are required by users, they could be
created as re-usable components using the GDL.
The toolkit would be enhanced by a greater range of ready-to-use analytical
functions, particularly for time-based calculations, and access to external
functions.
User-definable extensions
Developers can create their own functions using the Gentia Development
Language (GDL). The new functions can be stored and re-used.
Gentia also offers a text management option called Text Infobase, which can
be used for analysis of unstructured, text-based information. Agents can be
used to feed information from this back into the system.
Data mining
One of the sample applications provided with the product is an example of
datamining. Further datamining applications could be developed using the
GDL.
Gentia acquired datamining technology (K.wiz) from Compression Sciences
in May 1998. Gentia is currently investigating plans to integrate some K.wiz
components into the G-MAP platform or future Impact applications (see
Future enhancements).
Web support
Summary
Web access in Gentia is supported by Gentia WebSuite, which uses a CGI
gateway between the usual web server and the Gentia server. It thus enables a
web browser to access Gentia applications and base models.
Version 5.0.2 has removed the need for developers to specifically design
applications for web access, making WebSuite a more integral part of the
product. Web applications offer excellent functionality; there is even a web
version of Gentia’s flagship application, the Renaissance Balanced Scorecard.
However, when accessing base models via the Web, the user interface lacks the
sophistication and some of the functionality provided by desktop client access.
One useful feature is that write-back via the browser interface is supported.
But there could be better exploitation of the Internet as a distribution
mechanism.
Management
Summary
There are two locations in which data and users have to be managed: the
GentiaDB and applications developed in the Visual Development
Environment (VDE). However, a single point of administration is available
for client-server and web environments.
The Application Designer provides a single console to administer both
application and data-level security. There is a good range of management
facilities for controlling access rights, and security wizards provide control
down to cell level, along with the selective use of pages and objects. The management of
large systems can be largely automated through the use of Agents.
However, the GentiaDB would benefit from tools that allow systems
administrators to easily control and monitor processing tasks and data loads,
and view the GentiaDB log. The MDDB would benefit greatly from a more
flexible method for consolidating hierarchies and tools that help define which
measures should be precalculated or calculated on-the-fly.
Management of models
Separate management interface
Management of applications and their associated objects is carried out in
author mode, through the Warehouse Manager or through the Application
Designer interface.
Security of models
In the application development environment, Gentia has full security
support to define access for authoring and using applications.
Full data-level security, down to cell level, is also provided through security
wizards.
Query monitoring
Facilities are available to track which users have accessed what application
pages and models and when this occurred. There is a workable sample
application provided in the Object Library to analyse this information.
Management of data
How persistent data is stored (not scored)
Multidimensional data is stored in the MDDB. Data retrieved by SQL calls to
relational databases is always freshly retrieved.
Scheduling of loads/updates
Updates and schedules are organised by Agents, which can be scheduled
using point-and-click.
Event-driven scheduling
This is well supported using Agents, and is specified mainly through the use
of point-and-click.
Failed loads/updates
In GentiaDB there is a ‘logger’ transformer that can be used to collect details.
The developer can specify how failed updates are handled, using GDL script
within an Agent. Rollback is provided and, by dividing large updates into
smaller tasks, the consequences of failed loads can be minimised.
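The principle of dividing a large update into smaller tasks can be sketched as follows (illustrative Python, not GDL; the record format and names are invented): each chunk is checkpointed before it is applied, so a failure rolls back and restarts only that chunk.

```python
def apply_update(store, record):
    key, delta = record
    if delta is None:                    # simulate a bad record
        raise ValueError(f"bad record for {key}")
    store[key] = store.get(key, 0) + delta

def load_in_chunks(store, records, chunk_size=2):
    """Apply records chunk by chunk; a failure rolls back one chunk only."""
    loaded = 0
    for i in range(0, len(records), chunk_size):
        snapshot = dict(store)           # cheap 'checkpoint' of this chunk
        try:
            for rec in records[i:i + chunk_size]:
                apply_update(store, rec)
            loaded += len(records[i:i + chunk_size])
        except ValueError:
            store.clear()
            store.update(snapshot)       # roll back the failing chunk
            break
    return loaded                        # restart point for the next run

store = {}
n = load_in_chunks(store, [("a", 1), ("b", 2), ("c", None), ("d", 4)])
print(n, store)   # 2 {'a': 1, 'b': 2}
```

The first chunk survives intact and the returned count tells the next run where to resume, rather than restarting the whole load from the beginning.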
Distribution of stored data
One of the great strengths of Gentia is that the data can be partitioned and
distributed anywhere on the network. The ORB enables the application to
access data with location transparency.
Sparsity (only for persistent models)
The GentiaDB solution to sparsity is a multicube architecture, in which each
cube is made up of dense dimensions.
Methods for managing size
Within GentiaDB, all intermediate summary levels are precalculated at
consolidation time (which may be defined as different from load time).
Calculated measures can be either precalculated or calculated on-the-fly.
This is a manual task and there is no wizard support.
In-memory caching options
The cache can be configured to optimise performance of the MDDB through
the cache parameter in the config file.
Management of users
Multiple users of models with write facilities
GentiaDB uses the concept of ‘shadow’ pages for writing and consolidation.
Only when write and consolidate are complete is the new data made
available to others. Users reading the data always see a consistent dataset.
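The shadow-page idea can be sketched as follows (illustrative Python, not Gentia internals; the class and names are invented): writes go to a shadow copy that is swapped in atomically once consolidation completes, so readers never observe a half-written state.

```python
import copy

class ShadowStore:
    """Readers always see a consistent snapshot; writes accumulate in a
    shadow copy that is published atomically on commit."""
    def __init__(self, data):
        self._live = data
        self._shadow = None

    def read(self, key):
        return self._live.get(key)       # never sees partial writes

    def begin_write(self):
        self._shadow = copy.deepcopy(self._live)

    def write(self, key, value):
        self._shadow[key] = value

    def commit(self):                    # publish to other users
        self._live, self._shadow = self._shadow, None

s = ShadowStore({"sales": 100})
s.begin_write()
s.write("sales", 150)
print(s.read("sales"))   # 100 -- readers still see the old page
s.commit()
print(s.read("sales"))   # 150
```

Until `commit` runs, every reader continues to work against the old live page, which is what gives the consistent dataset described above.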
User security profiles
User security profiles can be set at an individual or workgroup level and are
defined in the Application Designer or Warehouse Manager. Both user and
author access are defined on a level between one and seven, allowing a fine
level of control. Users are grouped into workgroups to enable sharing of
information within a workgroup. Users can be in several workgroups.
User security can be applied across client-server and web communities and
can be dependent on access mechanisms. For example, administrators may
restrict user or group access to an application either via a locally connected
client or a web browser.
Query governance
This is not necessary for MDDB data and not available for SQL queries.
Gentia relies on the underlying RDBMS for SQL monitoring and governance.
While Gentia cannot stop an SQL query while it is running within an
RDBMS, there is an option for limiting the number of records returned to the
client.
Restricting queries to specified times
There is no direct support for limiting queries to certain times of the day or
week, although they can be scheduled to run at specific times with Agents.
Management of metadata
Controlling visibility of the ‘road map’
The visibility of applications is controlled by the Warehouse Manager, so that
users can only see those to which they have access.
Adaptability
Summary
Gentia does not have ‘out-of-the-box’ adaptability but, while requiring some
programming, Agents can provide a powerful means of developing
mechanisms that support adaptability of models and applications.
Metadata
Synchronising model and model metadata
There is no end-user metadata to synchronise.
Impact analysis
There is no support for impact analysis to show the consequences of changing
a model.
Metadata audit trail (technical and end users)
This is not applicable, as there is no real metadata to audit.
Access to upstream metadata
There is no direct integration for enabling the designer to access metadata
generated by the extraction tools.
Performance tunability
Summary
The distributed client-server architecture of the Visual Development
Environment offers excellent support for allocating processing flexibly at the
server level, on the client or through a combination of the two. Processing can
also be configured down to the application page level; for example, an
application page could be constructed that automatically switches to server-
based processing for web applications, improving access speed and reducing
network traffic.
Within GentiaDB, performance tunability is largely dependent upon good
design decisions, such as storing (caching) large dimension business
structures locally and using the Gentia server to update those models. The tool
would be enhanced by some wizard support for this process.
ROLAP
A ROLAP option is not directly supported. Although data from SQL sources
can be incorporated into any Gentia application, it will be displayed in
tabular, rather than cross-tabular, form. Gentia can create ‘scenarios’ (virtual
multidimensional cubes) dynamically from SQL data sources, but these are
limited to what can be held in memory.
MOLAP
Trading off load time/size and performance
In GentiaDB, all intermediate summary levels are precalculated at
consolidation time (which may be defined differently from load time).
Calculated measures can be precalculated or calculated on-the-fly.
Processing
Use of native SQL to speed up data extraction
GentiaDB and the Gentia Visual Development Environment (VDE) use
ODBC to access relational databases. The company ships Merant’s
DataDirect and SequeLink drivers and manager. Native access to RDBMSs
is provided by Merant’s SequeLink server software, embedded in G-MAP.
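The extract step described above can be sketched in miniature (using Python's built-in sqlite3 as a stand-in for an ODBC source; the table and column names are invented for illustration). The point of pushing native SQL at the database is that aggregation happens server-side, so only summarised rows cross the network.

```python
# Sketch of SQL-based extraction for a cube load; sqlite3 stands in
# for an ODBC-connected RDBMS.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, product TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                 [("UK", "Widgets", 100.0), ("UK", "Widgets", 50.0),
                  ("France", "Gadgets", 75.0)])

# Push the aggregation into SQL so the database, not the client,
# does the heavy lifting before the cube is populated
rows = conn.execute(
    "SELECT region, product, SUM(amount) FROM sales "
    "GROUP BY region, product ORDER BY region, product").fetchall()
for region, product, total in rows:
    print(region, product, total)
```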
Distribution of processing
Processing can be carried out on the client or server, and is specified when
the application is designed. A decision is made at page level. The underlying
Object Request Broker means that the processing can make use of objects
and services regardless of their location.
SMP support
GentiaDB (but not the Gentia Visual Development Environment) is based on
a multi-threaded architecture that can take advantage of SMP.
Customisation
Customisation
Option of using a restricted interface
As the end-user tool is nearly always an application developed within the
VDE, a restricted interface for occasional users may be required; this can
be built into the application.
Ease of producing EIS-style reports
The VDE can be used to create both complex applications and simple EIS-
type programs. Additionally, end users can create simple reporting
applications from the predefined reporting templates provided by the
Application Designer.
Applications
Simple web applications
VDE’s drag-and-drop and publish-and-subscribe development environment
can be used to build a simple EIS application to be run in a web browser. The
Gentia approach is to provide web access to all applications.
Development environment
As the development environment is specialised, there is a rich collection of
features, such as tables with drill-down features and charts to speed up the
process of building applications for multidimensional analysis.
The development environment is supported by the Application Framework
layer, which supports an extensive library of objects. The GDL can also be
used to add greater functionality to applications.
The development environment is OLE-compliant and there is extensive
documentation, which is provided on a CD rather than as hard copy.
Use of third-party development tools
Gentia has a published API and it is possible to develop the applications in
C++, Visual Basic or Java. In practice, this is seldom used, as the VDE offers
specialised OLAP functionality, enabling faster development.
Deployment
Platforms
Client
The Gentia Visual Development Environment (VDE) client runs on Windows
95, 98, NT, Macintosh (PPC) and Sun Solaris.
The Gentia WebSuite is supported on all current Gentia server platforms and
supports the following browsers: Netscape Navigator, Microsoft Internet
Explorer and Hot Java Browser.
The Gentia Add-In for Excel runs on Windows 95, 98 or NT. It is supported
on all Gentia server platforms, excluding Netware NLM.
Server
The GentiaDB and Gentia VDE Servers run on Windows NT and Unix (HP-
UX, Sun Solaris, Unixware, AIX, Generic SVR4, Pyramid and NeXT).
Additionally, Gentia VDE supports Netware NLM.
Data access
GentiaDB can access and load data from any ODBC-compliant relational
database, including Oracle, DB2, Informix, Ingres, SQL Server, Sybase,
Dbase, Paradox, Interbase, FoxPro and Btrieve. It can also access data held
in ASCII flat files and Excel spreadsheets.
Gentia is an OLE DB for OLAP consumer and can access third-party MDDBs
that support this standard.
Standards
Gentia VDE supports the OLAP Council’s MDAPI version 0.5 specification. It
has no plans to support version 2 of the OLAP Council specification.
Gentia supports Microsoft’s OLE DB for OLAP API as a consumer, allowing
it to access third-party MDDBs; Gentia plans to provide support as a data
provider with version 6.0.1 due in the fourth quarter of 1999.
Published benchmarks
Gentia does not have any published benchmarks.
Price structure
Pricing is based on the number of named users with a built-in volume
discount; there are six pricing bands, each with a ‘per user’ price. A 50-user
licence costs approximately $150,000.
Pricing for RBSC and Impact applications is also based on named users. In
addition, a restricted user licence (RUL) is available to provide customers
with the ability to build or customise applications using G-MAP component
modules.
Gentia plans to incorporate training in its pricing model for future releases.
Summary
At a glance
Terminology of the vendor
Ovum’s verdict
Product overview
Future enhancements
Commercial background
Product evaluation
Deployment
Platforms
Data access
Standards
Published benchmarks
Price structure
Evaluation: Hummingbird – BI/Suite Ovum Evaluates: OLAP
At a glance
Developer
Hummingbird Communications, North York, Ontario, Canada
Version evaluated
BI/Suite version 5.1, comprising BI/Query version 5.0.2, BI/Analyze version
5.1, BI/Web version 2.0 and BI/Broker version 2.0.
Key points
• An integrated desktop query, OLAP and reporting tool
• The server runs on Windows NT and Unix; clients run on Windows 95/98/
NT and Java 1.1-based web browsers
• BI/Suite is based on OLAP and query tools developed by Andyne
Computing, which Hummingbird bought in January 1998
Strengths
• A tightly-integrated suite of tools that is easy to use and manage across
client-server and web environments
• Supports a distributed architecture with load balancing across multiple
servers
• Client tools connect to a wide range of third-party relational and
multidimensional databases
Points to watch
• Accessing data from relational databases to build the model needs BI/
Query, Hummingbird’s query and reporting tool
• No support for complex, specialised analysis
• Limited range of front-end tools – HyperCubes can only be analysed
using the BI/Suite client tools
Ratings
Web support
Management
Adaptability
Performance tunability
Customisation
Ovum’s verdict
What we think
BI/Suite provides an easy-to-use set of tools; its greatest strength is its
simplicity. The client-server and web-based end-user tools provide immedi-
ate access to data – even to inexperienced users with minimal training.
Users also benefit from the ability to schedule reports, refresh HyperCubes
and share them through BI/Suite’s mid-tier broker server. Support for a
single-server architecture gives users the benefits of a shared report and
metadata repository, and load balancing. The payoff for administrators
comes with centralised management and administration. Both thin and fat
clients use the same content, security and user profiles from a single server,
thereby eliminating the hassle of managing multi-server set-ups.
While BI/Suite firmly establishes Hummingbird in the enterprise query and
reporting space, further development is needed for it to qualify as an enter-
prise OLAP solution in its own right. The client-server (BI/Analyze) and
web-based (BI/Web) OLAP clients satisfactorily cope with general business
intelligence needs; however, they are not suited to highly complex and
specialised analysis. Users that need this level of functionality will have to
export data to third-party tools such as Excel. Native and OLE DB for OLAP
access is provided for a range of third-party MDDB servers; however, the
product’s client-centric architecture restricts the size of the HyperCubes
(generated from a direct query to the relational database) that can be effec-
tively downloaded and analysed on the BI/Analyze desktop environment.
Hummingbird is rapidly establishing a presence in the business intelligence
market, largely through acquisitions. BI/Suite is Hummingbird’s first prod-
uct in this market. The company’s future success will depend on:
• how well it can integrate BI/Suite and its recently-acquired data
transformation, financial software and knowledge management
technology
• whether it can market this portfolio in a coherent way.
When to use
BI/Suite is most suitable if you:
• want a simple out-of-the-box solution that can be easily rolled-out across
the enterprise
• want to provide integrated query, OLAP analysis and reporting
capabilities to general business users, with minimal training
requirements
• want to perform OLAP on small datasets sourced directly from a variety
of relational databases or flat files
• have to distribute small business models across the enterprise.
Product overview
Components
The important components of BI/Suite version 5.1 are:
• BI/Query version 5.0.2
• BI/Analyze version 5.1
• BI/Broker version 2.0
• BI/Web version 2.0.
Figure 1 shows the primary functions of the components and how they relate
to client-server systems.
Although Ovum Evaluates: OLAP covers the entire suite of BI/Suite tools,
the main focus of this evaluation is the OLAP functionality provided by BI/
Analyze and BI/Web.
BI/Query
This is primarily an end-user tool for building ad hoc queries. It provides a
graphical interface for querying relational databases and generating reports.
BI/Query simplifies the process of data access by creating a semantic layer
(data model) that provides a graphical representation of the structure of a
relational database in familiar business terms.
Results sets can be used to:
• create standard reports (using the integrated report writer)
• act as data sources for creating multidimensional models (HyperCubes).
This indirect method is used to incorporate data from relational databases
and transaction sources into multidimensional data structures defined in BI/
Analyze.
BI/Query is based on an enhanced version of Andyne’s GQL (Graphical
Query Language).
BI/Analyze
This is an end-user tool for desktop OLAP analysis. It includes a
CubeCreator facility for building HyperCubes from BI/Query results sets, as
well as other flat-file data sources. Native access is provided for Hyperion
Essbase and Informix MetaCube. BI/Analyze can also connect to other third-
party MDDBs that support the OLE DB for OLAP interface as a data provider.
BI/Analyze is offered as a standalone desktop OLAP client or as a component
of BI/Suite.
BI/Analyze is an enhanced version of Andyne’s Pablo analysis tool.
BI/Broker
This is an application server that provides shared services, security and
administration functions. It includes a central repository that stores all data
models, queries, results sets, reports and their associated metadata. Admin-
istrators can publish information for multidimensional data sources to the
repository (although data sources are not stored there). BI/Analyze clients
access data sources via the repository. BI/Web uses the App Handlers within
the server to handle connections to data sources.
BI/Broker is built on a CORBA architecture. Visigenic’s ORB technology is
used to deploy application services as distributed objects over the network.
BI/Broker provides three main interfaces for managing the query and OLAP
environment:
• Administrator – primarily addresses server variables such as repository
directories and mapping the HTML links for BI/Web
• User and Group Manager – for end-user management and security
setting
• Scheduler – to schedule queries and refresh reports based on time- or
event-driven criteria.
Other components of BI/Broker include a load-balancing utility and a Session
Manager tool for managing user connections.
BI/Web
A thin client interface that provides query, OLAP analysis and reporting
functions via a web browser. BI/Web uses three main Java applets to gain
access to BI/Broker services. XML is used to render all content – reports,
data, models and query results – to web users.
The BI/Web interface is based on Java-based OLAP technology licensed from
Internetivity.
Architectural options
Full mid-tier architecture
BI/Suite does not support a full mid-tier MDDB architecture; however, BI/
Analyze can implicitly link to these environments as a front end.
Desktop architecture
This is the natural architecture for BI/Analyze; it creates multidimensional
cubes for desktop analysis. All OLAP processing is performed on the client.
Mobile architecture
BI/Analyze has a standalone OLAP engine to support a mobile architecture.
Mobile users can choose to work offline by packaging a report and ‘slicing
off’ data from a HyperCube or a third-party MDDB server. Data can be
refreshed when the user is reconnected to the data source.
Using BI/Suite
Query first, then analyse
To analyse data from a relational database or a transactional data source in
a multidimensional cube, a query definition has to be defined in BI/Query to
extract the data. The results set then appears as a data source in BI/
Analyze. The query and the cube-creation environments (CubeCreator) are
tightly integrated; a multidimensional structure is generated via point-and-
click.
BI/Query provides an easy-to-use graphical interface to define queries. As
shown in Figure 2, it uses a ‘data model’ to provide a graphical representa-
tion of the database. The data model acts as a mapping layer through which
end users can query the database and return a subset of data. Icons in the
data model can:
• relate to database tables
• be used as virtual tables that include joins from multiple tables or
calculated attributes.
BI/Query provides a number of design windows for creating data models.
These windows are used to specify data objects (which represent tables in
the database) and the relationships that tie them together. The data objects
are the starting point for building queries that retrieve information from a
database. Queries are built by selecting the attributes from tables. Users can
use more than one object in the data model to build a query. BI/Query also
provides a facility for incorporating prompts into queries, requiring users
to enter a value for a qualification.
It is not possible to use more than one query or data source to create a
HyperCube in BI/Analyze; in BI/Query, however, it is possible to combine
multiple queries into a single query. This query consolidates data from
multiple databases into a single source file before it is imported into BI/
Analyze.
Using BI/Query results sets as data sources for BI/Analyze is the most
efficient way of accessing relational data. Users can also create their own
data sources directly from other processes that return query results in a
comma- or tab-delimited flat-file format; saved BI/Query results sets are also
delimited text files.
All the HyperCubes built this way are ‘local’ HyperCubes – that is, stored on
the hard drive or on a network server – and can be distributed to end users.
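The cube-build step can be illustrated with a small sketch (this mimics, in miniature, the idea of deriving a multidimensional structure from a delimited results set; it is not CubeCreator's implementation, and the column names are invented).

```python
# Aggregate a tab-delimited results set into a two-dimensional cube
# keyed by (region, product), the way a desktop OLAP tool crosstabs
# a flat query result.
import csv
import io
from collections import defaultdict

results_set = """region\tproduct\tamount
UK\tWidgets\t100
UK\tGadgets\t40
France\tWidgets\t60
"""

cube = defaultdict(float)
for row in csv.DictReader(io.StringIO(results_set), delimiter="\t"):
    cube[(row["region"], row["product"])] += float(row["amount"])

print(cube[("UK", "Widgets")])  # 100.0
# Roll up one dimension: total for UK across all products
print(sum(v for (r, _), v in cube.items() if r == "UK"))  # 140.0
```

Because the whole structure lives in client memory, the size of the source results set directly limits what can be analysed on the desktop.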
Future enhancements
A number of minor maintenance releases are planned for 1999. Humming-
bird will port BI/Broker to Solaris, HP-UX and AIX platforms in mid-1999.
The next major release is due in mid-1999. Major enhancements include:
• the ability to drill through to detail-level data
• additional OLE DB features, such as support for dimension member
properties, multiple hierarchies and writeback
• enhancements to security
• tighter integration with Excel
• the ability to automatically load metadata (such as business terms and
short descriptions) into BI/Query data models from the Informatica
metadata repository
• increased analytical functions in the client tools
• raising the upper limits on the amount of data that can be stored in a
HyperCube and moving more processing tasks (such as filtering and
ranking) to the server.
In the longer term, support for accessing Oracle Express multidimensional
models is also planned. Integration will also be provided with third-party
metadata repositories for streamlined development. A wizard facility will be
provided, which will guide users in building BI/Query data models based on
the existing metadata. The metadata will also be accessible by BI/Analyze
when building HyperCubes.
Hummingbird intends to integrate its newly-bought extract, transform and
load (ETL) technology with its other products in order to provide an end-to-
end datamart solution. Detailed product announcements will be made in late
1999.
The intention to buy PC Docs will spur the development of an ‘enterprise
knowledge portal’ strategy; this strategy will integrate BI/Suite with PC
Docs’ document and knowledge management framework and Financials and
Case Management systems.
Commercial background
Company background
History and commercial
Hummingbird Communications is a Canadian software company. The com-
pany was founded in 1984 as a consulting firm; it changed direction shortly
afterwards to become a developer of PC-to-host connectivity and terminal
emulation software. It has three main products in this area:
• Exceed – a PC X server that connects Windows PCs to legacy X and Unix
applications
• NFS Maestro – designed to connect PC networks and host computer
systems
• HostExplorer – terminal emulation and connectivity software that links
PCs to IBM mainframes, and the AS/400 and Unix systems.
Hummingbird achieved rapid growth in the core network connectivity
market – it holds just under a 70% share of the PC X server market. Connec-
tivity software remains a profitable business for the company. As the market
began to plateau, Hummingbird reviewed its business direction and markets.
In January 1998, the company diversified into the business intelligence
market by buying Andyne Computing for $60 million. Andyne developed two
products:
• GQL (Graphical Query Language) – a query and reporting tool
• Pablo – a desktop OLAP tool.
These two products were rebranded as BI/Query and BI/Analyze. They were
integrated into the BI/Suite, which was first released in July 1998.
In March 1999, Hummingbird announced that it intended to buy three other
companies:
• PC Docs Group International, a US company that develops document and
knowledge management software. If PC Docs is bought by Hummingbird,
it will be the first evidence of knowledge management and business
intelligence technology coming together
• Context, a New York-based company specialising in software and
consultancy services for the financial industry. Context’s main product is
Financial Frameworks, a packaged software solution aimed at the
financial services sector
• Leonard’s Logic, developers of Genio, a data extract, transform and load
(ETL) tool.
Hummingbird completed its first IPO in 1993 in Canada; it later issued two
more public offerings in the US. The company is quoted on the Nasdaq and
Toronto stock exchanges.
Hummingbird employs more than 800 people; its headquarters are in North
York (Toronto), Canada, with offices and distributors worldwide. R&D is
based in Montreal, Quebec and Toronto and Kingston (the original head-
quarters of Andyne), Ontario. Revenues for the 1998 fiscal year (excluding
revenue from bought companies) grew by 10% to $130 million; net income
was $26.6 million. Around one-quarter of the revenues come from business
intelligence tools. The company’s biggest market is North America, which
accounts for approximately 60% of sales.
Customer support
Support
Worldwide technical support is provided by Hummingbird and its distributors
and VARs via telephone, e-mail, fax, the Web and an electronic bulletin
board system. Support (including upgrades) is priced annually at 20% of the
overall licence fee.
Training
Training courses for all components of BI/Suite are available on-site or at
training centres in Canada, the US and Europe. Courses include one-day
introductory classes or two- and three-day intensive programmes for users
and administrators. Training for resellers and distributors is offered.
Consultancy services
Hummingbird’s Professional Services Group provides services for implemen-
tation, and project and strategic consultancy.
In April 1998, Hummingbird bought Datenrevision, a specialist German
data warehousing consultancy.
Distribution
North America
Hummingbird Communications
1 Sparks Avenue
North York
Ontario M2H 2W1
Canada
Tel: +1 416 496 2200
Fax: +1 416 496 2207
Europe
Hummingbird Communications
66 rue Escudier
92774 Boulogne Cedex
France
Tel: +33 1 41 10 0505
Fax: +33 1 41 10 0500
Asia-Pacific
Hummingbird Communications
Level 19, AGL Centre
111 Pacific Highway
Sydney NSW 2060
Australia
Tel: +61 2 9929 4999
Fax: +61 2 9956 6442
http://www.hummingbird.com
E-mail: sales@hummingbird.com
Product evaluation
End-user functionality
Summary
The strength of the tool lies in its ease-of-use – notably, the ability to define
the structure of a model quickly. CubeCreator provides a user-friendly
graphical interface that makes extensive use of wizards and design templates.
An AutoDesign feature eliminates most of the initial work in designing
HyperCubes by creating a ‘starter’ set of dimensions, measures and hierarchies
that can be further refined.
However, the tools are geared towards general business modelling rather
than creating large, specialised models with complex calculations. Developers
can only access a single data source to build a model. Using BI/Query to
access SQL data means a sample of the source data is not available to the
model designer (although OLAP users can bring up a view of the query
results set).
Basic design
Design interface
CubeCreator provides an easy-to-use graphical interface for model design
and incorporates some sophisticated design features. The AutoDesign facility
automatically builds HyperCubes from a BI/Query results set. CubeCreator
automatically parses the results set and builds a ‘starter’ set of dimensions
that can be further refined using wizards, design templates and a drag-and-
drop Editor interface.
Standard design templates can be built and re-used for future design.
Visualising the data source
It is possible to view a sample of the query data on screen; however, the
underlying database tables cannot be viewed.
Universally available mapping layer
The data model acts as a mapping layer for query only.
Prompts for metadata
When building HyperCubes using flat files or BI/Query results sets,
CubeCreator generates most of the metadata (including its location,
attributes, prompts, variables and qualifications). It also describes the
structure of the data in the HyperCubes and includes meaningful
descriptions for members, levels and dimensions.
Developers can include additional metadata when creating a HyperCube
(such as the author, details and a description), but they are not explicitly
prompted to do so.
Multiple designers
Multiple designers
Security governs reading and editing access to HyperCubes. HyperCubes are
locked when they are being edited by a developer, but there are no check-in/
check-out facilities.
Support for versioning
There is no support for versioning.
User-definable extensions
A non-procedural language is not provided for defining complex functions.
Custom calculations can be built using nested rules.
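The nesting of calculation rules can be sketched as follows (a conceptual illustration only, in Python; this is not BI/Analyze's rule syntax, and the measure names are invented).

```python
# Each rule derives a measure from the base measures; rules may
# reference other rules, nesting calculations arbitrarily deep.
base = {"revenue": 200.0, "cost": 150.0}

rules = {}
rules["margin"] = lambda m: m["revenue"] - m["cost"]
# A nested rule built on top of the "margin" rule
rules["margin_pct"] = lambda m: rules["margin"](m) / m["revenue"] * 100

print(rules["margin"](base))      # 50.0
print(rules["margin_pct"](base))  # 25.0
```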
Data mining
There is no direct support for data mining. Hummingbird has a partnership
with Angoss for integrating data mining capabilities.
Web support
Summary
BI/Web provides the same intuitive interface as BI/Query, with the addition
of a Java-based navigation bar. The web interface provides the same drag-
and-drop OLAP functionality as the fat client, with some exceptions: the
HTML export option is basic, limited to BI/Query-generated reports and
graphs only; and charts created in BI/Analyze cannot be exported in HTML
format.
Management
Summary
Management of models
Separate management interface
BI/Broker provides several graphical administration interfaces for managing
data, schedules, push channels and end users.
Security of models
Security can be set on database table attributes and rows, and elements of a
HyperCube.
Query monitoring
Simple query statistics can be traced, logged and analysed using the client
tools. There is a facility to log performance metrics based on queries.
Management of data
How persistent data is stored (not scored)
Data can be stored on the server – BI/Broker or a remote file server – or
locally, on the client. In a client-server configuration, HyperCubes are always
downloaded on to the client machine for processing. In a thin-client architec-
ture, processing occurs on the server.
Scheduling of loads/updates
BI/Analyze can launch BI/Query to load or update data. HyperCubes can be
scheduled in this way to refresh at specified time periods (hourly, daily,
weekly, monthly, annually or custom). BI/Broker notifies users of refreshes
by sending an e-mail or a broadcast message, or distributing it via FTP.
Event-driven scheduling
HyperCubes can be scheduled to load or update data based on external
events; for example, an update to the data source.
Management of users
Multiple users of models with write facilities
Writeback to models is not supported.
User security profiles
Object-level security is managed via the Access Control Manager. BI/Broker’s
User and Group Manager interface provides a drag-and-drop interface for
creating and administering security objects. Security profiles can be created
to govern access to HyperCubes.
Once users and groups have been established, system permissions (which
determine what services users can access) and access privileges (which
determine what items in the repository users can see and use) can be
assigned. Permissions can be set on an individual, group or business role
level. The tool supports a flexible system of privileges that can be assigned
on a report-by-report basis. Adding single users and groups is a painless
task; security permissions are inheritable, allowing for easy application to
large user groups, and security information can also be imported from
Windows NT. The tool would benefit, however, from the ability to save these
security profiles and apply them to other users or domains.
Management of metadata
Controlling visibility of the ‘roadmap’
The visibility of HyperCubes (dimensions and measures), data sources
(tables, rows and values) and associated metadata can be controlled using
the Access Control Manager tool. Additionally, the user profile determines
what system data and resources end users have access to.
Adaptability
Summary
Metadata
Synchronising model and model metadata
The structural model metadata remains synchronised with the HyperCube
at all times. Descriptive metadata about a model needs to be updated manu-
ally, however, and on a per-model basis.
Impact analysis
Impact analysis is not supported.
Metadata audit trail (technical and end users)
A metadata audit trail facility is not supported.
Access to upstream metadata
BI/Analyze does not support metadata integration with third-party tools,
although it can take the descriptive metadata names from BI/Query and
implement them in the HyperCube. Hummingbird provides PowerView, a
metadata query and reporting tool that has a prebuilt BI/Query data model
– this provides a simplified view of Informatica’s PowerMart repository
tables.
Performance tunability
Summary
BI/Broker has a scalable architecture that is enabled through the use of load
balancing. Multiple support servers can easily be added for replicating
application services; however, BI/Analyze is a desktop tool and performance
depends on the size and complexity of HyperCubes loaded on to the client
machine.
ROLAP
BI/Analyze does not support ROLAP operations.
MOLAP
Typically, the size of HyperCubes makes performance and size trade-offs a
non-issue. CubeCreator can update cubes by adding only new data, so that
cubes do not have to be completely rebuilt during a fresh data load. The
extent of consolidation at build time can be customised, but there is no
support for recalculating only the affected values.
Processing
Use of native SQL to speed up data extraction
BI/Query provides native SQL access to most of the leading relational
databases; ODBC support is also provided. BI/Analyze does not access
relational data directly.
Distribution of processing
Processing can be distributed across multiple BI/Broker application servers.
Load balancing is supported by replicating application services across
support BI/Broker servers as needed.
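The replication-based load balancing described above can be sketched in a few lines. This is a hypothetical illustration only: the document does not specify BI/Broker's balancing policy, so round-robin dispatch is assumed here, and the server names are invented.

```python
import itertools

# Hypothetical sketch of load balancing via replicated application services:
# requests are dispatched round-robin across support servers. Round-robin is
# an assumption for illustration; server names are invented.
class Broker:
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def dispatch(self, request):
        # Pick the next replicated support server in rotation
        server = next(self._cycle)
        return f"{request} -> {server}"

broker = Broker(["support-1", "support-2", "support-3"])
print(broker.dispatch("report-A"))  # report-A -> support-1
print(broker.dispatch("report-B"))  # report-B -> support-2
```

Adding a support server then simply means adding another entry to the rotation.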
SMP support
The BI/Broker server does not support SMP parallelism.
Customisation
Summary
1 2 3 4 5 6 7 8 9 10
Customisation
Option of a restricted interface
The BI/Query data model can be configured on a per-user basis to provide
restricted interfaces for subsequent query and OLAP analysis.
Ease of producing EIS-style reports
There is support for producing EIS-style reporting interfaces. This is done by
creating EIS data models using BI/Query, which links buttons to predefined
reports.
Applications
Simple web applications
There is no support for developing web-based analytical applications.
Development environment
BI/Suite does not provide a development environment of its own.
Use of third-party development tools
Integration with third-party development tools is achieved via OLE support
(as both a client and a server).
Deployment
Platforms
Client
BI/Query and BI/Analyze run on Windows 95/98/NT workstations. BI/Web
runs on any Java 1.1-capable web browser.
Server
BI/Broker runs on Windows NT and Unix (Sun Solaris, HP-UX and IBM AIX).
Data access
BI/Query provides native support to access data from all major relational
databases, including Oracle, DB2, Microsoft SQL Server, Sybase, Informix,
Ingres, NonStop SQL, RedBrick Warehouse, Teradata and Unidata. It can
also use ODBC to access other relational sources.
BI/Analyze can connect (natively or via OLE DB for OLAP) to third-party
MDDBs and ROLAP servers, including Hyperion Solutions Essbase, IBM
DB2 OLAP Server, Applix TM1, Microsoft SQL Server OLAP Services, SAP
Business Information Warehouse (SAP BW), WhiteLight, SAS, Informix
MetaCube and NCR TeraCubes. It can also access data held in flat files.
Access to ERP applications is via third-party tools (Acta for SAP and Noetix
Views for Oracle).
Standards
BI/Analyze supports Microsoft’s OLE DB for OLAP as a consumer.
Published benchmarks
BI/Suite does not have any published OLAP benchmarks.
Price structure
Pricing for the Windows NT version of BI/Suite starts at $20,000 for BI/
Broker with core reporting, publishing, scheduling and BI/Web capabilities.
BI/Web’s ad hoc query functionality costs an additional $10,000 and BI/Web
OLAP functionality costs $20,000 for each central BI/Broker. Concurrent and
named user pricing schemes are available for end users: 20 concurrent users
cost $50,000 and named ports are priced at $295 for each user (regardless of
use with fat or thin clients). Standalone versions of BI/Query and BI/Analyze
cost $695 per user. The BI/Query administration tool costs $1,995. Additional
BI/Broker support servers are $4,000 each.
Unix pricing is approximately 50% higher for the server components.
Summary
At a glance
Terminology of the vendor
Ovum’s verdict
Product overview
Future enhancements
Commercial background
Product evaluation
Deployment
Platforms
Data access
Standards
Published benchmarks
Price structure
Evaluation: Hyperion Solutions – Hyperion Essbase Ovum Evaluates: OLAP
At a glance
Developer
Hyperion Solutions, Sunnyvale, California, USA
Versions evaluated
Hyperion Essbase Server version 6.0; Wired for OLAP version 4.1; Hyperion
Integration Server version 1.1; Hyperion Essbase Web Gateway version 2.1;
Hyperion Essbase Objects version 1.1
Key facts
• A multidimensional database server that can be accessed from
spreadsheets, a web browser or a variety of third-party front-end tools
• Server runs on Windows NT, OS/2, Unix and AS/400; clients run on
Windows 95, Windows NT and Macintosh or can use a web browser
• The Essbase MDDB is embedded in more than 60 vertically-oriented
analytical applications developed by Hyperion and its application
partners
Strengths
• Friendly graphical environment for designing and maintaining models
• Provides strong multi-user write-back to the multidimensional database
• Can be accessed by a range of front-end tools, including standard
spreadsheets
Points to watch
• Analysis, presentation and distribution functionality depends entirely on
the front-end tool used
• Presumes a clean and consistent data source – Essbase does not provide
any of its own ETL capabilities
• Questions still remain about the growth and stability of the newly formed
company
Ratings
1 2 3 4 5 6 7 8 9 10
Web support
Management
Adaptability
Performance tunability
Customisation
Application
Any analytical application that runs on Essbase. It consists of a
multidimensional database, rule files for loading data and scripts to calculate
data.
Attributes
Detailed descriptive qualities of dimension members; for example, customer
demographics and product details. Attributes in Essbase look and act like
normal dimensions.
Calc script
A procedural script that calculates the multidimensional database or subsets
of the database.
Data block
A multidimensional array of cells. It is the primary storage unit within the
database and is defined during the initial system build.
Data load rules
These are used to import data into the database, and also to define the
hierarchies and relationships within the dimensions. They are used during
the initial build process and for ongoing systems maintenance.
Database
A physical multidimensional data structure that is stored persistently in the
Essbase Server.
Database outline
Defines the structure of an Essbase multidimensional data model, including
the definition of all hierarchies and other relationships, plus many
calculations. It corresponds to Ovum’s definition of a business model.
Dimensional attributes
Essbase provides attribute information in the form of ‘dimensional
attributes’ that are attached to dimensions in the database outline.
Dimensional attributes behave like dimensions; they have structure, can be
cross-tabulated and calculated dynamically in models.
Partitioning
Divides a database into separate parts that can be loaded and calculated in
parallel on multiple servers.
UDAs
User defined attributes (UDAs) are textual tags attached to dimensions that
are used for filtering data.
Ovum’s verdict
What we think
Essbase’s main strength is in providing a consistently fast and easy-to-use
multidimensional database. Its powerful server-based calculation engine
provides a good fit for the logical structure of dimensionally complex business
models.
Essbase scores consistently across most of our evaluation perspectives.
Although many of its features are not unique, users will be hard pressed to
find another OLAP product with a similar range of functionality. The latest
release (version 6.0) builds on Essbase’s hallmark capabilities – namely
performance and modelling simplicity. It excels in its graphical design tools,
which provide a rapid and consistent approach to business modelling without
requiring advanced IS skills. The GUI-based definition of data load rules
simplifies the task of designing highly complex multidimensional models.
Hyperion has worked hard to address the weaknesses traditionally associated
with multidimensional databases – notably poor scalability and database
explosion.
Version 6.0 extends Essbase’s capabilities to larger dimensional structures
and provides greater agility in supporting attribute-rich data. While
financial planning will remain a ‘bread and butter’ application, these new
features move the product out of its comfortable financial niche to new areas
such as customer-centric analysis. But as Essbase continues to be pushed
into larger and more demanding applications, Hyperion will need to make it
look and act more like an RDBMS, including the provision of better
management facilities for fault tolerance, rollback and up-time.
Essbase provides a number of prebuilt functions for ad hoc analysis; its
multi-user write-back access is well equipped to support advanced budgeting
and forecasting applications. However, users that require advanced analytic
functionality will have to build it into the server-based model, or integrate
with specialist third-party tools. Essbase supports a choice of third-party
front-end tools, including familiar spreadsheets. However, customers should
choose their front-end tool carefully, because the level of functionality will
vary considerably from tool to tool.
While Essbase has the capability to be used standalone, it is primarily
designed to be used as part of a best-of-breed data warehousing solution. It
therefore presumes a clean data source. Essbase provides rudimentary data
transformation against relational databases, but it relies entirely on third-
party ETL tools for fully-fledged data extraction and scheduling.
Perhaps the greatest concern is the stability of the company following its
merger with Arbor Software. Its problems with disappointing earnings and a
weak stock price have culminated in a number of executive shake-ups. These issues
still need to be fully resolved to convince customers (and shareholders) of
long-term growth. One of the primary missions will be to befriend application
partners that have been alienated by Hyperion’s strategy of being both a
platform for partners’ analytic applications and an application provider itself.
When to use
Essbase is suitable if you:
• need rapid response times for complex multidimensional queries
• have a need for rapid deployment of OLAP to power users across the
enterprise
• are building complex analytical applications that require concurrent
multi-user write access to the database
• have existing spreadsheet skills to exploit.
It is less suitable if you:
• do not have a structured data warehouse or other cleansed data sources
• want an end-to-end business intelligence solution
• have highly specialised analytical requirements that require custom
development
• require a flexible approach to OLAP – Essbase is a MOLAP-only solution.
Product overview
Components
The main components of a Hyperion Essbase OLAP solution are:
• Essbase Server version 6.0
• Wired for OLAP version 4.1
• Hyperion Integration Server version 1.1
• Hyperion Essbase Web Gateway version 2.1
• Hyperion Essbase Objects version 1.1
Essbase Server
This is the central component of Essbase, providing a fast, server-based
OLAP engine and a multidimensional database. It also stores all Essbase
application components, including rules for loading data and scripts to
calculate data.
The Essbase Server is responsible for all data loading, OLAP query
processing, calculations and security. Calculations can be precalculated or
done at query time. Essbase databases use patented technology (which
Hyperion calls ‘dynamic dimensionality’) to optimise the handling of sparse
and dense dimensions for efficient storage and performance.
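The sparse/dense idea can be illustrated with a minimal sketch. This is not Essbase's internal implementation, and all dimension and member names are invented; it only shows the general technique of keying blocks of dense cells by combinations of sparse dimension members.

```python
# Minimal sketch (not Essbase internals) of sparse/dense storage: dense
# dimensions define the cell layout inside each data block, while each
# combination of sparse dimension members keys a separate block, so storage
# is only allocated for combinations that actually contain data.
# All dimension and member names here are invented for illustration.
class Cube:
    def __init__(self):
        self.blocks = {}  # (sparse members) -> block of dense cells

    def set(self, sparse_key, measure, period, value):
        block = self.blocks.setdefault(sparse_key, {})
        block[(measure, period)] = value

    def get(self, sparse_key, measure, period):
        return self.blocks.get(sparse_key, {}).get((measure, period))

cube = Cube()
cube.set(("Cola", "East"), "Sales", "Q1", 120.0)

# Only one block exists, however large the potential sparse space is
print(len(cube.blocks))                           # 1
print(cube.get(("Cola", "East"), "Sales", "Q1"))  # 120.0
```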
A major feature of Essbase is support for concurrent access and update by
multiple users. Essbase provides transparent locking of data to allow multi-
user write access. New attribute handling features also extend its scalability
to cope with models that include large dimensions and deep hierarchies.
Essbase Objects
Essbase Objects is a development environment for building OLAP
applications using ActiveX controls. Essbase Objects consists of a family of
controls for data access, visual data display, data navigation, query and
report layout. A number of third-party ‘Essbase-aware’ controls are also
available.
Architectural options
Web enablement
Web access is managed through the introduction of a web server that
interfaces with the Essbase Server. Users are able to access the Essbase
databases via standard web browsers. There are two options:
• the Essbase Web Gateway uses CGI to link the Essbase Server with a web
server. It utilises HTML to provide interactive analysis and reporting
capabilities
• Wired for OLAP also provides a web interface for constructing ad hoc
queries against the Essbase Server. It is implemented using Java applets.
Wired for OLAP ‘applications’ can be developed via a standard desktop
and deployed for web access. The applications are stored on a mid-tier
server and accessed via the web browser.
Desktop architecture
Essbase is a server-based tool. There is no support for a two-tier desktop
mode.
Mobile architecture
A Personal Essbase version supports a mobile architecture. It allows users to
store a model, or part of a model, on a client PC for offline analysis.
Synchronisation of data and structures between the Essbase Server and
remote clients is supported upon reconnection.
Using Essbase
Essbase supports a set of highly graphical tools for designing and using
multidimensional models. The modelling tools are aimed at power users, and
there is no clear division of responsibilities between model designer and end
user. However, model designers are expected to have some level of DBA-type
skills and a good understanding of the business to use the tools effectively.
The end user can also be the power user/business analyst, or simply an
information consumer. The latter group requires no knowledge of the
database architecture, just an understanding of the business model.
Typically, they access the results of the work done by the model designers.
Creating a model
The Application Manager is the main interface for designing models. Within
Application Manager, the four principal model building tasks of model
definition, calculation, data loading and reporting are clearly defined as
separate functions with appropriate graphical user interfaces for each.
Define the database outline
The database outline determines the structure of the business model. This
outline is created using the Outline Editor, which provides a graphical
representation of the dimension hierarchy in which each dimension and
consolidation level in the model is represented, as shown in Figure 2.
As dimensions are added to the structure, it is assumed (by default) that they
will be aggregated according to the hierarchy, but this can be easily changed
via a point-and-click interface. While some dimensional hierarchies are
typically created manually, larger ones are invariably loaded directly from
existing systems by importing data and then mapping this to the required
model. If the underlying data sources subsequently change, this process can
be re-run to ensure that the models are always synchronised with the data
source.
Add calculated measures
Calculations are usually defined once, and built directly into the server-based
model. Calculations can easily be applied to any level within a dimension in
the database outline by typing them into the outline or via point-and-click
using a calculator-style interface called the Formula Editor. Essbase provides
a set of mathematical functions and cross-dimensional operators for
constructing calculation formulas.
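The effect of a server-based calculated measure can be sketched as follows. This is purely illustrative (Essbase's own formula syntax is not shown); the measure Margin = Sales − COGS and all member names are invented examples of a cross-dimensional calculation.

```python
# Hypothetical illustration of a calculated measure applied across cells:
# Margin = Sales - COGS for every (product, market, period). The cell keys
# and values are invented; this is not Essbase formula syntax.
cells = {
    ("Cola", "East", "Q1", "Sales"): 120.0,
    ("Cola", "East", "Q1", "COGS"): 70.0,
    ("Cola", "West", "Q1", "Sales"): 90.0,
    ("Cola", "West", "Q1", "COGS"): 55.0,
}

def add_margin(cells):
    """Derive the Margin member for every cell combination with Sales."""
    derived = dict(cells)
    for (prod, mkt, per, measure), sales in cells.items():
        if measure == "Sales":
            cogs = cells.get((prod, mkt, per, "COGS"), 0.0)
            derived[(prod, mkt, per, "Margin")] = sales - cogs
    return derived

result = add_margin(cells)
print(result[("Cola", "East", "Q1", "Margin")])  # 50.0
```

Defining such a formula once in the server-based model means every front-end tool sees the same derived values.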
Multiple formulas and actions can also be placed in calc scripts for advanced
calculations that require a more procedural approach or where multiple
iterations through the data are required. A Calc Script Editor provides a text
editing panel, customised menus, a syntax checker, and function and macro
templates for a point-and-click development environment.
Load in the data
Specifying data load rules is the easiest and quickest way to load data into
the model. The Data Prep Editor provides a graphical way of defining these
rules. Data load rules are sets of operations that Essbase performs on data
from an external data source file as it is loaded or copied into the Essbase
model. They support simple transformations for the mapping of raw data into
the multidimensional model.
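The kind of simple transformation a data load rule performs can be sketched like this. It is a hypothetical illustration, not Essbase's rule format: each rule maps one field of a raw source record into the model, applying a small transformation on the way in; all field values and rule names are invented.

```python
# Hypothetical sketch of data-load-rule-style transformations: each rule maps
# a field of the raw record into the multidimensional model, with a simple
# transformation applied on load. Not Essbase's actual rule format.
raw_rows = [
    ("cola", "east", "1999-Q1", "120.0"),
    ("cola", "west", "1999-Q1", "90.0"),
]

rules = {
    "Product": lambda r: r[0].title(),        # 'cola' -> 'Cola'
    "Market":  lambda r: r[1].title(),        # 'east' -> 'East'
    "Time":    lambda r: r[2].split("-")[1],  # '1999-Q1' -> 'Q1'
    "Sales":   lambda r: float(r[3]),         # text -> numeric value
}

cells = {}
for row in raw_rows:
    key = (rules["Product"](row), rules["Market"](row), rules["Time"](row))
    cells[key] = rules["Sales"](row)

print(cells[("Cola", "East", "Q1")])  # 120.0
```

Because the rules are stored with the model, the same mapping can be re-run for ongoing maintenance loads.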
Exploring models
Users can access the model via the Essbase spreadsheet client or Wired for
OLAP client; a variety of other third-party front-end tools can also be used.
Navigation around the model is governed by the hierarchies and structures
built into the multidimensional model, though the actual method of
navigation varies according to the chosen front-end tool.
At its simplest, the spreadsheet user logs on, connects to the appropriate
database and double-clicks on any cell to begin a query. The required level
within a dimension can be found either by drilling down, by typing in the
name of the dimension level, or by using a select option that opens up the
database outline for searching. Data is presented in a standard spreadsheet
from which drill-down and slice-and-dice functions are directly available.
This is shown in Figure 3.
The Spreadsheet Client uses the native Excel or Lotus 1-2-3 environment for
further analysis of data. A query wizard is also provided to help with the
entire process. Alternatively, the Query Designer tool provides a graphical
drag-and-drop method for selecting dimension members and filtering data.
Wired for OLAP offers a similar range of query tools, but in a more
graphically-oriented fashion. Reports can be specified and additional
calculations and sorting methods defined from within the interface. The
designer tools provided by Wired for OLAP also support the presentation of
data in briefing book-type applications and other EIS-style interfaces. The
same capabilities are available on the desktop and via the Web.
The Web Gateway can be used to access the Essbase database from a web
browser as HTML pages. It provides full OLAP functionality such as drill-
down and slice-and-dice, as shown in Figure 4. Write access is also supported
via the web client.
Future enhancements
Version 6.5
The next release of Essbase Server (version 6.5) is planned for the end of
2000. Version 6.5 will focus on enhanced parallel processing capabilities for
the server, specifically for data loading, OLAP queries and OLAP
calculations.
Version 7.0
Version 7.0 (due at the end of 2001) will support a hybrid OLAP architecture,
allowing data to be stored and accessed from either multidimensional or
relational sources. This will allow metadata (essentially the Essbase Outline)
and data to be wholly, or partially, stored in relational database tables. Other
RDBMS-like facilities will also be provided, including security, rollback and
recovery, and fault tolerance. Essbase administrators will be able to mix and
match data storage options between the MDDB and relational tables for
optimal performance. Version 7.0 will also deliver a Java version of Essbase
Application Manager.
Platforms
A Linux version of Essbase is planned for the second quarter of 2000;
Hyperion is a Red Hat Software development partner. Hyperion is also
working closely with IBM to develop an OS/390 MVS mainframe port for
Essbase, which is scheduled for general availability in the first quarter of
2000.
Consolidation
Integration between Hyperion’s tools is also planned on a number of fronts.
Initial efforts will focus on rationalising the various client-server and web
front-end tools to provide a more holistic product offering. For example:
• a new client, called Hyperion Analytic Reporter, will replace Hyperion
Reporting and Spider-Man
• a new spreadsheet add-in will combine the existing Essbase and Hyperion
Enterprise add-ins (Analyst and Retrieve)
• version 5.0 of Wired for OLAP will also replace the Essbase Web Gateway
• the capabilities of Integration Server will be merged into the Essbase
Application Manager component.
Web-enablement
Hyperion Information Portal is currently under development, and will
provide a single web-based interface to access personalised reports and
underlying business intelligence systems.
Analytic applications
Hyperion will focus on the provision of ‘Essbase-powered’ analytical
applications for its suite of Enterprise Performance Measurement (EPM)
analytical applications. These will include:
• Hyperion Consolidation – for financial consolidation and management
reporting
• Hyperion Planning – for financial budgeting and planning
Commercial background
Company background
• OEM partners, such as IBM and ShowCase, that offer operating system
and relational database solutions for Essbase
• data integration (ETL) partners that either publish data to Integration
Server or produce multidimensional cubes directly. Hyperion has alliances
with leading ETL vendors, such as Acta, Informatica, Sagent, Ardent/
Informix and Constellar
Customer support
Support
24×7 hot-line telephone support as well as on-site support arrangements are
available worldwide. The support operations of Hyperion and Arbor will be
merged. Support is included, along with software updates, for an annual
maintenance fee of 18% of the licence price.
Training
Three-day introductory training courses on Essbase are available for power
users. A two-day course is also provided for systems administrators. Casual
end users do not require much training beyond a familiarisation with the
business model.
Consultancy services
Hyperion has a large consultancy organisation, consisting of 350 consultants
worldwide. However, it offers a limited range of consultancy services for
Essbase, and no significant revenues are generated from this.
Hyperion also has formal relationships with many of the global management
consultancies and systems integrators (such as EDS, Shell Services and
Perot).
Distribution
North America
Hyperion Solutions Corporation
1344 Crossman Avenue
Sunnyvale
CA 94089
USA
Tel: +1 408 744 9500
Fax: +1 408 744 0400
Europe
Hyperion Solutions UK
Arbor House
Old Bracknell Lane West
Bracknell
RG12 7DD
UK
Tel: +44 1344 664000
Fax: +44 1344 664001
http://www.hyperion.com
E-mail: info@hyperion.com
Product evaluation
End-user functionality
Summary
1 2 3 4 5 6 7 8 9 10
Summary
1 2 3 4 5 6 7 8 9 10
Basic design
Design interface
Business models are defined and maintained in database outlines using the
graphical and highly intuitive Outline Editor. Users (with appropriate
security rights) can easily drag-and-drop dimensions, and specify
relationships between dimensions from this interface.
Visualising the data source
The Data Prep Editor and the Integration Server allow model designers to
view the file/table layout of source data held in relational databases and text
files. A sample of the data can also be viewed through a simple graphical
interface.
Universally available mapping layer
Integration Server’s metadata ‘catalogue’ provides a mapping layer to access
data stored in a relational database. The mapping layer is universally
available to end users.
Prompts for metadata
Designers are prompted to include names for dimensions when creating the
database outline. They are also given the option to create additional model
metadata about the status of a model, but are not explicitly prompted to do
so.
Multiple designers
Multiple designers
Database outlines can be shared; the outlines are locked when being edited.
However, there are no check-out/check-in facilities.
Support for versioning
There is no support for versioning control.
Summary
1 2 3 4 5 6 7 8 9 10
Simple regression
Essbase relies on the Spreadsheet Client to provide regression analysis.
Time series forecasting
There is no support for time series forecasting methods.
User-definable extensions
Calc scripts can be defined to provide server-based analytical functions. Calc
scripts enable users to define complex formulas using procedural logic. More
than 200 server-based functions are supported by the scripting language.
Datamining
Essbase does not provide any support for datamining.
Web support
Summary
1 2 3 4 5 6 7 8 9 10
Management
Summary
1 2 3 4 5 6 7 8 9 10
Management of models
Separate management interface
Application Manager serves as Essbase’s main management interface. All
key administrative functions, including model building, data loading and
security access, are managed through pull-down menus and toolbars.
Security of models
The integrity of models is controlled through a combination of:
• user security profiles; individuals or groups of users are granted or denied
the ability to view, change or create a model
• a multi-layered approach for intra-model security; a filter layer defines
read/write access levels for dimension levels (down to cell level).
Query monitoring
Essbase does not provide any query monitoring facilities.
Management of data
How persistent data is stored (not scored)
The Essbase Server stores multidimensional data persistently. The data is
refreshed periodically from back-end data sources – there is no caching of
data on the server or the client. If application partitioning is used, a database
may either be stored across multiple servers or be divided into a number of
sub-models (or partitions).
Scheduling of loads/updates
The scheduling of data loads and updates is handled either through a batch
control facility supported by the Essbase scripting language or by using
third-party tools such as Seagate Info.
Event-driven scheduling
There is no support for event-driven scheduling.
Failed loads/updates
Essbase informs administrators of complete, partial and failed loads. It
generates an error log file and provides a detailed list of records that did not
load.
Distribution of stored data
The partitioning facilities in Essbase allow multidimensional models to be
designed in a variety of ways and stored across separate servers. A single
model may be partitioned across Essbase Servers, with a ‘virtual’ model for
central consolidation. Cross-model calculations are supported via the use of
location aliases – effectively a ‘join’ between models.
Management of users
Multiple users of models with write facilities
Essbase employs a data block locking scheme for handling multiple users
writing back to the database. It issues exclusive write locks for data blocks
when they are being updated; other users are able to access the data blocks
in a read-only mode.
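The block-locking behaviour described above can be sketched in a few lines. This is a simplification, not Essbase internals: one lock per data block is taken exclusively for writes, while reads return the last committed value without taking a write lock; all block keys and values are invented.

```python
import threading

# Minimal sketch (not Essbase internals) of per-block write locking: writers
# take an exclusive lock on the block being updated, while other users can
# still read the last committed value of any block.
class BlockStore:
    def __init__(self):
        self.blocks = {}
        self.locks = {}
        self._meta = threading.Lock()  # guards the lock table itself

    def _lock_for(self, key):
        with self._meta:
            return self.locks.setdefault(key, threading.Lock())

    def write(self, key, value):
        with self._lock_for(key):   # exclusive per-block write lock
            self.blocks[key] = value

    def read(self, key):            # read-only access, no write lock taken
        return self.blocks.get(key)

store = BlockStore()
store.write(("Cola", "East"), 120.0)
print(store.read(("Cola", "East")))  # 120.0
```

Locking at the granularity of individual data blocks, rather than the whole database, is what lets many users write back concurrently to different parts of a model.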
User security profiles
Users with ‘supervisor’ privileges have full access to all system components
and functions. Four other levels of security rights for model access can be
defined for individuals or groups of users. These are read, write, calculation
and database designer.
Query governance
Generally, there is no concept of query governance within Essbase. However,
dynamic attribute calculations can be restricted by user security.
Restricting queries to specified times
There is no support for restricting queries to a particular time of day.
Management of metadata
Controlling visibility of the ‘road map’
Essbase’s overall security mechanisms govern the visibility of the metadata
road map. Users are only able to see the metadata and data they have been
granted access to. Access to parts of the metadata catalogue in Integration
Server can similarly be restricted. Other than this, there are no facilities to
hide or show metadata selectively.
Adaptability
Summary
1 2 3 4 5 6 7 8 9 10
Essbase models can adapt to change, but there is limited support for the
management of change. Users can take advantage of the drag-and-drop
method for adding new dimensions and measures in models. Changes in
underlying data sources can be automatically uploaded to the
multidimensional database as part of a standard batch update process. There
are no facilities for ensuring that metadata remains synchronised with
changes to models and/or data sources. Essbase does not provide any
facilities for impact analysis and there is no direct integration with upstream
metadata.
Metadata
Synchronising model and model metadata
Metadata comments and descriptions added to the database outline are not
automatically synchronised.
Impact analysis
Essbase does not support impact analysis.
Metadata audit trail (technical and end users)
Essbase does not provide an audit trail facility for end users.
Access to upstream metadata
Essbase cannot access metadata in third-party tools – although it does have a
preference for Acta for ERP data integration. However, as part of the
Integration Server development programme, a number of partnerships have
been announced with vendors in the ETL arena. This will give Essbase the
capability to access the metadata of these products and the underlying data
for rapid model development and increased adaptability.
Performance tunability
Summary
1 2 3 4 5 6 7 8 9 10
ROLAP
Essbase is a MOLAP-oriented product. Although Integration Server executes
ROLAP-like SQL queries directly to relational databases, the returned data
is staged in a prebuilt multidimensional model prior to analysis.
MOLAP
Trading off load time/size and performance
An Essbase model can have a mixture of precalculated and dynamically
calculated variables to avoid database explosion. Essbase can load updates
incrementally and subsequently calculate only those parts of the database
that are affected by the changes. Parallel loading and recalculation of
partitions also improve load performance.
Version 6.0 of Essbase also provides a new memory-based data cache for
increased performance. The index file (the look-up of dimension member
combinations for data blocks) can be loaded into memory and user retrieval
buffers can be set, so that when a user queries a block of data it is
fetched straight into memory. Any change to a dimension structure triggers
a restructure that is carried out in memory.
Processing
Use of native SQL to speed up data extraction
Data access is via ODBC drivers only.
Distribution of processing
Essbase’s Application Partitioning function enables developers to
simultaneously load and calculate Essbase models across several Essbase
Servers (or multiple processes in a single server).
SMP support
Essbase Server is based on 32-bit multi-threaded software that takes full
advantage of SMP parallelism.
Customisation
Summary
1 2 3 4 5 6 7 8 9 10
Customisation
Option of using a restricted interface
The Essbase Spreadsheet Client relies on the customisation features of Excel
to provide restricted views. Wired for OLAP provides greater scope for
customisation.
Applications
Simple web applications
The Essbase Web Gateway can integrate with standard web authoring tools
to produce HTML applications. The Gateway can also integrate Java and
ActiveX components and generates dynamic HTML form controls that can
interact with standard JavaScript and VBScript to add greater functionality
to web applications. Essbase Objects can also be used to produce ActiveX
applications for web deployment.
Development environment
Essbase does not provide its own visual development environment. Essbase
Objects can link into graphical development languages such as Visual Basic.
Developers (or VARs) can assemble the ActiveX controls to build EIS-style
interfaces to Essbase. A number of third-party ‘Essbase-aware’ controls are
also provided by partners such as SPSS and John Galt Solutions.
Use of third-party development tools
Development tools (including Visual Basic, Microsoft Visual C++, Delphi and
PowerBuilder) can be used to integrate with Essbase Objects or the
spreadsheet interface.
Deployment
Platforms
Client
Essbase Spreadsheet Client (Excel and Lotus 1-2-3) and Wired for OLAP run
on Windows 95, Windows NT and web browsers.
Personal Essbase and Hyperion Integration Server clients run on Windows
95.
Server
Essbase Server runs on Windows NT (Intel and Compaq Alpha), OS/2 and
Unix (HP-UX, RS6000/AIX and Solaris). Showcase Corporation has also
ported Essbase to the AS/400 environment.
Hyperion Integration Server runs on Windows NT and Unix (HP-UX and
Solaris).
Data access
Essbase can access data from the major relational databases (Oracle,
Informix, Sybase, IBM DB2 and Microsoft SQL Server) and any other ODBC-
compliant database. It can also access data in text files and spreadsheets.
Hyperion Application Link can be used to integrate with third-party business
applications. Hyperion currently has or is developing certified links to all the
major ERP and CRM transaction systems, including SAP, Oracle, Lawson,
JD Edwards and Siebel.
Standards
Essbase has a published API that has been adopted by more than 300 third-
party application, service and tools partners to provide integration with
Essbase.
Wired for OLAP supports OLEDB for OLAP as a consumer.
Published benchmarks
Hyperion published figures for the OLAP Council’s APB-1 OLAP benchmark.
Performance figures for Essbase 6.0 against the APB-2 benchmark are also planned.
Price structure
Essbase Server is priced at $60,000 for a ten concurrent user licence.
Integration Server is priced at $20,000 per Essbase Server.
Essbase Objects and Essbase Web Gateway are both priced at $10,000 per
Essbase Server, with an unlimited developer/user licence. Wired for OLAP
clients cost $600 per seat for Windows and from $100 per seat for Java. Other
Essbase tools and modules are licensed separately.
Summary
At a glance
Terminology of the vendor
Ovum’s verdict
Product overview
Future enhancements
Commercial background
Product evaluation
Deployment
Platforms
Data access
Standards
Published benchmarks
Price structure
At a glance
Developer
Information Advantage, Eden Prairie, Minnesota, USA
Versions evaluated
DecisionSuite version 5.7
Key facts
• A ROLAP tool with a server-based OLAP engine
• Server runs on Unix platforms; clients run on Windows 3.1, Windows 95,
Windows NT and web browsers
• Acquired IQ Software, an enterprise query and reporting tool vendor, in
September 1998
Strengths
• A scalable system – query processing is automatically optimised between
the server and the RDBMS
• Supports a friendly notebook-style interface across all the client tools
• Flexible scheduling, report sharing and messaging facilities are matched
by few tools
Points to watch
• Server runs only on Unix platforms
• Limited support for advanced analytical and forecasting functions
• An expensive solution for small projects – aimed at organisations with a
large-scale data warehouse strategy
Ratings
Web support
Management
Adaptability
Performance tunability
Customisation
Terminology of the vendor
Agents
Background processes that users define to automatically run reports at a
pre-determined date and time. When combined with event triggers, agents
also provide a means of automating analysis and reporting tasks.
Alerts
Alerts are part of DecisionSuite’s messaging system. For example, an agent
can announce the availability of an automated report by sending an alert to
one or more DecisionSuite users.
Category
Limits user access to the metadata tables to provide a particular view of the
data warehouse. Facts and filters are associated with a category, and all
reports are defined in relation to a category.
Facts
A generic term for data included in reports. Facts can be either data items
stored in database tables or calculations that are derived from stored data
items and formulae. Facts are defined in the metadata tables and viewed in
reports.
Filter
A static or dynamic constraint on the data presented in a report, used to
define logical groups of dimension items for inclusion in reports. Filters are
re-usable components stored in the metadata tables.
Metadata tables
DecisionSuite uses relational databases to store two types of metadata. The
first type is used to map a data warehouse structure to a business model.
The second type of metadata concerns all the other elements of the
DecisionSuite environment and client applications, including reports, tem-
plates and filters.
Report
A presentation of data organised according to a report definition that speci-
fies a particular view of the business model and its layout. DecisionSuite
reports are interactive, and different perspectives of the model can be
achieved.
Report template
A report definition saved as a template that defines its layout, content and
properties. Unlike a report, it does not contain results of processing the
definition.
Ovum’s verdict
What we think
DecisionSuite is an attractive ROLAP solution for customers requiring
access to corporate data stored in large, finely-tuned data warehouses. The
tool’s scalability is underpinned by a well designed server-based architec-
ture, including an object request broker and a proven ROLAP engine that
maximises the use of RDBMS technology while addressing the limitations of
SQL.
The DecisionSuite designer and end-user tools are easy to use and well
integrated; they share the same friendly notebook-style interface that will be
appreciated by casual and power users alike. DecisionSuite’s flexible report
scheduling, sharing and distribution options are matched by few other OLAP
tools and provide strong support for group working environments. However,
it lacks the analytical complexity needed for advanced and specialised
analysis models.
DecisionSuite requires a strong commitment to the ROLAP approach. The
server component only runs on Unix and connects to Unix-based RDBMSs.
Information Advantage argues that it has focused on the quality of its data
access rather than the quantity. Hence, it concentrates on providing
optimised, native access to those RDBMSs it has chosen to support.
DecisionSuite, like most ROLAP tools, can be difficult to implement because
any purchase decision usually involves a wider data warehousing considera-
tion. Customers without a data warehousing strategy will probably need to
buy-in some consulting and migration assistance. This means that
DecisionSuite is an expensive OLAP solution; the tool’s pricing strategy is
aimed primarily at ‘big-ticket’ accounts.
When to use
DecisionSuite is suitable if you:
• are already committed to a large-scale data warehouse strategy, or are
preparing for one
• want to develop customer relationship management applications that
analyse large sets of data
• have a requirement to easily share reports between large numbers of
users
• have a strong commitment to Unix.
It is less suitable if you:
• want to perform OLAP against normalised data sources
• want to develop highly customised OLAP applications
• have a need for advanced or specialised analytical functions
• need a flexible business model – the business model is tied closely to the
structure of the data warehouse.
Product overview
Components
DecisionSuite consists of the following components:
• DecisionSuite Server version 5.7
• DecisionSuite Client version 5.7; includes Analysis, NewsLine, Data
Workbench and Application Workbench
• WebOLAP version 5.7.
Figure 1 shows the primary functions of the components and whether they
run on the client or the server.
DecisionSuite is a ROLAP tool designed to run directly against relational
data stored in a data warehouse. It uses a server-based OLAP engine to
interpret end-user queries and dynamically generate SQL queries. The tool
works best against relational data (typically aggregate tables) organised in
star, snowflake, federation or constellation schemata.
DecisionSuite assumes that it will work against a cleansed data source;
therefore data warehouse population, data cleansing and advanced data
transformations are beyond its scope.
DecisionSuite Server
A Unix-based ROLAP engine that processes client requests against a data
warehouse. The server carries out a significant amount of data processing
(joins, aggregations and calculations). The server also takes care of security
and manages predefined DecisionSuite Agents to perform various back-
ground processing tasks and services. An Active Report Server component
acts as a repository for storing reports and report definition templates.
Report scheduling and distribution capabilities are also supported.
The ROLAP engine uses an intermediary metadata layer to dynamically
generate SQL for a query, and delivers formatted content back to the presen-
tation tier. The metadata layer provides a business-oriented map of the
underlying database table structures, which automatically synchronises
applications with changes in the RDBMS. This information is stored in a
series of metadata tables, usually in the data warehouse. The metadata can
also map data stored in more than one RDBMS.
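To make the metadata-driven SQL generation concrete, here is a minimal sketch of the idea. All table, column and metadata names are invented for illustration; DecisionSuite's actual metadata format and SQL generator are proprietary and not published.

```python
# Hypothetical sketch of a metadata-driven SQL generator in the ROLAP style
# described above. Every name here is invented; sqlite3 merely provides a
# small star schema to run the generated SQL against.
import sqlite3

# Metadata layer: business-oriented names mapped to warehouse tables/columns
METADATA = {
    "facts": {"sales": "SUM(f.sales_amount)"},
    "dimensions": {
        "region": ("dim_region r", "r.region_key = f.region_key",
                   "r.region_name"),
    },
    "fact_table": "fact_sales f",
}

def generate_sql(fact, dimension):
    """Translate a (fact, dimension) request into aggregate SQL."""
    table, join, label = METADATA["dimensions"][dimension]
    measure = METADATA["facts"][fact]
    return (f"SELECT {label}, {measure} AS {fact} "
            f"FROM {METADATA['fact_table']} JOIN {table} ON {join} "
            f"GROUP BY {label}")

# Tiny in-memory star schema (fact table plus one dimension table)
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_region (region_key INTEGER, region_name TEXT);
CREATE TABLE fact_sales (region_key INTEGER, sales_amount REAL);
INSERT INTO dim_region VALUES (1, 'North'), (2, 'South');
INSERT INTO fact_sales VALUES (1, 100.0), (1, 50.0), (2, 75.0);
""")
rows = con.execute(generate_sql("sales", "region")).fetchall()
print(sorted(rows))  # [('North', 150.0), ('South', 75.0)]
```

Because the SQL is derived from the metadata map rather than hard-coded, renaming or restructuring the underlying tables only requires updating the map, which is the synchronisation benefit described above.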
WebOLAP
Enables reports to be accessed and analysed from a web browser. It provides
NewsLine-like facilities. WebOLAP is closely integrated with the
DecisionSuite Server (via CGI), with reports dynamically generated in
HTML.
Architectural options
Full mid-tier architecture
DecisionSuite is a ROLAP tool and does not implement a full MDDB store
on the server.
Desktop architecture
DecisionSuite is a server-based tool and does not support a two-tier desktop
architecture; all processing is done on the server.
Mobile architecture
This architecture is not directly supported by DecisionSuite. However, a
single-tier mode is possible via a partnership with Brio Technology. The two
companies have developed DecisionSuite Brio Connection, a facility that
supports the transfer of DecisionSuite data into Brio’s Brio.Insight client
tool for detached, offline analysis.
Using DecisionSuite
DecisionSuite provides a number of tools that distinguish clear responsibili-
ties for designers, end users and administrators.
The metadata tables used to map the data warehouse structure to business
dimensions are defined by experienced DBAs using the Data Workbench
tools. These users are expected to have a good understanding of the data
warehouse and SQL syntax.
Reports can be defined by DBAs, but can also be created and viewed by
business end users using the Analysis client. Experienced power users can
use this interface to enhance models by including their own custom
measures and filters. The NewsLine interface provides a simple interface for
‘information consumers’ who only require easy viewing access to reports
scheduled by the DecisionSuite Server.
Administrators are provided with a separate Workbench interface for man-
aging end users. The Application Workbench provides a number of graphical
tools that enable managers to configure user security profiles and govern
database queries according to time and the size of results sets returned from
the RDBMS. The Workbench also provides the management interface for the
Active Report Server component, allowing distribution schedules to be built
and agents to be developed that ‘push’ results directly to end users via
alerts, e-mails or report attachments.
Company background
History and commercial
Information Advantage was formed in 1990, following IBM’s purchase of
Metaphor, the EIS/DSS vendor. The Metaphor product group was absorbed
into IBM, but the consulting arm set up a new company which became
Information Advantage. For the first years of its existence, the new company
concentrated on providing consultancy. It developed its first product, a Unix-
based decision support engine called Axsys, as a by-product of its consul-
tancy work. Axsys eventually evolved into DecisionSuite Server. The
DecisionSuite client tools were first released in August 1994.
In 1993, the company obtained venture capital funding in order to develop
the product side of the business. Information Advantage is now focused on
being a product vendor, and has been building up its direct sales force from
offices in the US and Europe. In December 1997, Information Advantage
completed its IPO. Revenue for fiscal 1998 grew 118% to $25.6 million.
In September 1998, Information Advantage acquired IQ Software, an
enterprise query and reporting tool vendor, for $65 million. Although the two
companies had radically different product lines and sales models,
Information Advantage is now working towards product synergy. The
combined company (which retains the Information Advantage name)
employs 420 people and has its corporate headquarters in Eden Prairie,
Minnesota, with 27 subsidiaries and a network of distributors worldwide. It
is valued at $75 million and has more than 1.7 million users of its products
worldwide.
Training
A number of public and on-site training courses are provided. These include
a one-day introductory course for casual end users, a two-day course for
analyst-type users and a four-day technical course for IS developers and
DBAs. Computer-based training is also available.
Consultancy services
Information Advantage’s Services division consists of experienced data
warehousing consultants and systems integrators that focus on specific
vertical sectors. Consultants use Information Advantage’s DecisionPath
methodology for implementation.
Distribution
US
Information Advantage
7905 Golden Triangle Drive
Eden Prairie
MN 55344
USA
Tel: +1 612 833 3700
Fax: +1 612 833 3701
Europe
Information Advantage International
3 Furzeground Way
Stockley Park
Uxbridge
Middlesex, UB11 1JF
UK
Tel: +44 181 867 4600
Fax: +44 181 867 4699
http://www.infoadvan.com
E-mail: marketing@infoadvan.com
Product evaluation
End-user functionality
Summary
Basic design
Design interface
The Data Workbench provides a graphical interface for mapping the data
warehouse structure onto DecisionSuite metadata. This interface displays
the metadata in spreadsheet-style tables. The Data Workbench is adequate
for this task, but it would be better if there were an overview of the main
elements, rather than just a series of tables. It would also help if dialogues
and pick lists were available for maintaining the metadata; the wizard
provides them only during metadata creation.
Reports (sets of dimensions, calculations and filters) represent the business
model. The design interfaces for both metadata and reports share the same
general style of interface.
Visualising the data source
Designers can see a sample of data from a selected relational table. However,
they cannot view the overall database schema.
Universally available mapping layer
Metadata tables can be defined to map dimensions, measures and hierar-
chies to specific parts of the data warehouse. Categories provide end users
with a restricted view of the metadata tables.
Prompts for metadata
Designers are not automatically prompted to add additional metadata when
creating the metadata tables or defining reports.
Building the dimensions
Selecting columns for the dimensions
Columns for dimensions can be selected using point-and-click. A wizard
facility is provided to speed up the mapping process.
Selecting the members shown in a dimension level
Filters can be used to select dimension members. Filters are created by
point-and-click. There are three types of filter: dynamic, static and level. The
differences are related to the type of SQL generated.
Defining a dimension hierarchy
Developers can easily define drill-down hierarchies using point-and-click.
Multiple and split drill-down hierarchies may be defined. Unbalanced
hierarchies are also supported.
Time dimension
Time dimensions must be defined according to standard or custom time
periods in the business model. Multiple time dimensions are supported, and
filters can be used to define non-standard time periods, such as fiscal year
and lunar months.
Annotating the dimensions
Dimensions in the model can be assigned long and short name descriptions
by designers, which can subsequently be viewed in a DecisionSuite report by
end users. End users cannot edit these dimension descriptions.
Default level of a dimension hierarchy
Designers can define a default level for a dimension hierarchy when opening
a report.
Multiple designers
Multiple designers
DecisionSuite does not provide any special support for multiple designers.
Support for versioning
There is no direct support for versioning control.
Other ‘building the business model’ features
DecisionSuite has links to Logic Works’ Erwin data modelling software,
which is able to create DecisionSuite metadata tables.
The ETL tool Syntagma from Relational Matters also integrates with
DecisionSuite. It is able to build and populate DecisionSuite metadata and
aggregate tables.
User-definable extensions
A scripting language can be used to create add-ins that integrate with third-
party products (such as SPSS) to access advanced analytical functions.
Data mining
DecisionSuite does not provide support for data mining.
Web support
Summary
WebOLAP provides strong web access for accessing and analysing predefined
reports. However, web users cannot define new reports or add new filters or
calculations to the report definitions. Reports can be easily published and
distributed to a wide range of users over the Web using Internet-based search
engines, hyperlinks and e-mail.
Management
Summary
Management of data
How persistent data is stored (not scored)
DecisionSuite processes data directly from the RDBMS and creates multidi-
mensional models at runtime which are cached on the server. However, once
a report has been defined, the data can be stored persistently on the
DecisionSuite Server or any other application server, and can be periodically
refreshed for current data.
Scheduling of loads/updates
The loading of data into the data warehouse is outside the scope of
DecisionSuite. Once it has been loaded and stored as part of a report defini-
tion, a scheduler facility can be used to automate the refresh of reports.
Scheduling can be based on times, dates or events. Users can apply a refresh
schedule to a group of reports.
Event driven scheduling
Event triggers can be specified for updating existing reports or scheduling
new reports. Triggers can be based on events such as an update to the data
warehouse or events external to the OLAP environment.
Failed loads/updates
An agent may be set up to look for the completion of a report update and
then alert users. All agent tasks are persistent, and therefore automatically
re-submitted if the update fails.
Distribution of stored data
Data is stored persistently in the database or the DecisionSuite Server (as a
report). When a query is executed, the data is temporarily cached on the
server at runtime; there is no caching on the client.
Sparsity (only for persistent models)
DecisionSuite uses two analytic ‘workspaces’ to efficiently process dense and
sparse data returned from the RDBMS. DecisionSuite dynamically routes
data to the appropriate workspace based on its sparsity percentage. For
sparse data models, DecisionSuite automatically uses a multidimensional
b-tree, while for dense data models, data is returned as a multidimensional
array.
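The routing decision can be pictured with a small sketch. The threshold value and the data structures are invented for illustration; DecisionSuite's actual workspace internals are not published.

```python
# Illustrative sketch (invented threshold and structures) of routing data to
# a dense or sparse workspace by sparsity percentage, as described above.
def sparsity(cells, dims):
    """Fraction of empty cells in the full Cartesian space of the dimensions."""
    total = 1
    for size in dims:
        total *= size
    return 1 - len(cells) / total

def build_workspace(cells, dims, threshold=0.5):
    """Return ('sparse', keyed store) or ('dense', array) by sparsity."""
    if sparsity(cells, dims) > threshold:
        return "sparse", dict(cells)          # keyed store, b-tree-like role
    dense = [[0.0] * dims[1] for _ in range(dims[0])]
    for (i, j), value in cells:
        dense[i][j] = value
    return "dense", dense

cells = [((0, 0), 10.0), ((2, 3), 5.0)]       # 2 of 16 cells populated
kind, workspace = build_workspace(cells, (4, 4))
print(kind)  # sparse
```

A keyed store pays a lookup cost per cell but wastes no space on empty cells; a dense array does the opposite, which is why routing on the sparsity percentage is attractive.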
Methods for managing size
The server cache is subject to size restrictions based on query governance
definitions.
In-memory caching options
In-memory caching is not supported.
Informing the user when stored data was last uploaded
Each report is time-stamped with information about when the data was last
updated. This information is not automatically displayed.
Management of users
Multiple users of models with write facilities
Typically, DecisionSuite is designed to permit simultaneous read-only access.
User security profiles
The DecisionSuite Server uses a flexible security model to connect to the
RDBMS, ranging from a one-to-one user-to-connection relationship to all
users sharing the same connection. User profiles grant access to parts of
the DecisionSuite application environment and metadata. Profiles can be
assigned on an individual or workgroup basis. The profiles are also closely
linked to categories, which define user access to parts of the data warehouse
and available calculations and filters.
Query governance
Administrators can define the maximum number of concurrent processes
used by DecisionSuite Server at any given time. They can also control the
maximum number of rows returned and processed on a user profile basis,
and specify the maximum time a query is allowed to run in the database.
Restricting queries to specified times
There is no support for restricting queries to specific times of the day.
Management of metadata
Controlling visibility of the ‘road map’
The category definition controls which metadata a user can access: it
determines the model metadata, calculations and filters that can be included
in a report for a particular user or group of users.
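The effect of a category can be sketched as a simple lookup that returns only the permitted metadata. The category and user names here are invented; DecisionSuite's real category definitions live in its metadata tables.

```python
# Illustrative sketch (all names invented) of category definitions
# restricting the facts and filters visible to a user, as described above.
CATEGORIES = {
    "sales_europe": {"facts": {"revenue", "units"}, "filters": {"eu_only"}},
    "finance":      {"facts": {"revenue", "margin"}, "filters": set()},
}
USER_CATEGORY = {"alice": "sales_europe", "bob": "finance"}

def visible_metadata(user):
    """Return only the facts and filters the user's category permits."""
    return CATEGORIES[USER_CATEGORY[user]]

print(sorted(visible_metadata("alice")["facts"]))  # ['revenue', 'units']
```

Since every report is defined in relation to a category, restricting the category restricts everything built on top of it, which is how one warehouse can present different views to different workgroups.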
Adaptability
Summary
Performance tunability
Summary
ROLAP
Multipass SQL
DecisionSuite automatically generates multipass SQL statements.
Options for SQL processing
An important feature of DecisionSuite is its ability to intelligently balance
SQL processing between the DecisionSuite Server and the database.
Speeding up end-user data access
The server cache is volatile; it cannot be stored and re-used.
Aggregate navigator
DecisionSuite can automatically access the highest-level aggregate tables in
the database. It calculates the Cartesian cross-product of the dimensional
data models, which produces aggregate-level priority information.
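The principle of aggregate navigation can be sketched as follows. Table names, levels and row counts are invented; DecisionSuite's actual navigator and its priority calculation are proprietary.

```python
# Illustrative sketch only (table names and sizes invented): an aggregate
# navigator picks the cheapest table whose stored levels cover the query.
AGGREGATES = [
    # (table name, dimension levels it stores, approximate row count)
    ("agg_year_region",  {"year", "region"},             24),
    ("agg_month_region", {"month", "region"},           288),
    ("fact_detail",      {"day", "store", "product",
                          "month", "year", "region"}, 1_000_000),
]

def choose_table(query_levels):
    """Return the smallest table that can answer all requested levels."""
    candidates = [(rows, name) for name, levels, rows in AGGREGATES
                  if query_levels <= levels]
    return min(candidates)[1]

print(choose_table({"year", "region"}))  # agg_year_region
print(choose_table({"day", "store"}))    # fact_detail
```

Answering a yearly query from a 24-row aggregate rather than a million-row detail table is where most of a ROLAP tool's interactive performance comes from.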
MOLAP
DecisionSuite is a ROLAP tool.
Support for multiple users
Information Advantage claims that the DecisionSuite Server-based architec-
ture can support many users without degrading performance. It has many
customer sites with more than 1,000 concurrent users running
DecisionSuite reports against large data warehouses.
Processing
Use of native SQL to speed up data extraction
DecisionSuite uses native SQL interfaces to connect to all the major
RDBMSs. It also uses ODBC for Unix to connect to Red Brick, Teradata and
HP-Intelligent Warehouse data warehouses.
Distribution of processing
A client request is automatically routed to the least utilised DecisionSuite
Server for processing. There is no automatic load balancing between these
servers, because each functions independently.
It is, however, possible to balance processing between the database server
and the DecisionSuite Server.
SMP support
DecisionSuite Server takes full advantage of SMP technology.
Customisation
Summary
Customisation
Option of using a restricted interface
Various aspects of the DecisionSuite tools’ interface can be modified to
provide restricted or extended views and functionality.
Ease of producing EIS-style reports
Application Workbench provides an ‘add-in’ facility to extend or link pre- and
post-process operations for reports. Typically, these are calls to an external
procedure, such as a Windows application or a Unix shell script, and are
used to customise the execution or results of a report, or add new capabili-
ties.
Applications
Simple web applications
A web gateway API is provided for the development of simple EIS interfaces
in HTML or JavaScript.
Development environment
DecisionSuite does not have a visual development environment. It does
provide a scripting language for defining server-based procedures for inter-
action with external systems or data. The scripting language is a cross
between Visual Basic and Unix shell scripts, and uses the standard ‘vi’
editor.
Use of third-party development tools
DecisionSuite client DLLs can be called by development tools such as Visual
Basic, PowerBuilder and Visual C++.
Platforms
Client
DecisionSuite clients run on Windows 3.1, Windows 95 and Windows NT.
WebOLAP runs on standard web browsers including Netscape, Microsoft
and Mosaic.
Server
DecisionSuite Server runs exclusively on Unix: HP-UX, IBM AIX, NCR, SGI
IRIX, Sequent, Sun Solaris, Data General DG/UX, Digital Alpha, Siemens
Reliant and Unisys SVR4.
Data access
DecisionSuite provides native access to the following relational databases:
Oracle, DB2, Sybase, Informix, Tandem and MDI. ODBC for Unix drivers are
supported to provide access to Teradata, HP-Intelligent Warehouse and Red
Brick.
Standards
DecisionSuite has its own proprietary server and client APIs. The
DecisionSuite OLE DB connection provides support for Microsoft’s OLE DB
for OLAP API. WebOLAP supports HTML, Java and JavaScript.
Published benchmarks
Information Advantage has not published any OLAP benchmarks.
Price structure
Pricing depends on the number of servers and registered users, and the size
of the underlying database. Typical entry level pricing is $150,000 for the
DecisionSuite Server and 50 users. Clients are priced separately:
• NewsLine costs $200
• Analysis costs $1,200
• Data Workbench costs $16,500
• Application Workbench costs $6,600.
SQL Server 7.0 OLAP Services
Summary
At a glance
Terminology of the vendor
Ovum’s verdict
Product overview
Future enhancements
Commercial background
Product evaluation
Deployment
Platforms
Data access
Standards
Published benchmarks
Price structure
At a glance
Developer
Microsoft, Redmond WA, USA
Versions evaluated
Microsoft SQL Server 7.0 OLAP Services (Beta 3 and Final Feature
Editions)
Key facts
• A multidimensional engine that can support MOLAP, ROLAP and
HOLAP
• OLAP Services runs on Windows NT; end-user tools run on Windows 9x
and NT
• Comes free with SQL Server 7.0 Enterprise and Standard Edition
Strengths
• Extensive wizard support makes it very easy to use
• Easy to move between MOLAP, ROLAP and HOLAP storage options
• Wide range of end-user tools available from third parties
Points to watch
• Not a total OLAP solution – requires an end-user tool
• Has no ‘ready-to-run’ web features
• Not yet integrated with the Microsoft Repository
Ratings
Web support
Management
Adaptability
Performance tunability
Customisation
Terminology of the vendor
Cube
Microsoft’s term for the multidimensional business model.
HOLAP
The details are stored in a relational database, and the aggregates in the
MDDB.
Library
The library supports the re-use of dimensions, mappings to data sources and
roles. When defining a model, dimensions can be freshly defined (and option-
ally stored in the library) or library definitions used.
MOLAP
The details and aggregates are stored in the MDDB.
Partition
A model can have multiple partitions, all of which have the same dimen-
sions. However, the partitions can be stored in different locations, have
different data storage options (that is MOLAP, ROLAP or HOLAP) and
different degrees of optimisation.
ROLAP
The details and aggregates are stored in the RDBMS.
Ovum’s verdict
What we think
OLAP Services is a product that sets new standards of ease of use and
confirms OLE DB for OLAP as the de facto standard for accessing multidi-
mensional databases. Outside of its use in a small workgroup, it is more
accurately described as a component rather than a complete solution.
The outstanding features of Microsoft’s OLAP Services are its ease of use
and its architectural flexibility. It has more than 30 wizards, which enable
straightforward multidimensional models to be built entirely using point-
and-click. One of the most impressive wizards is the data storage and aggre-
gation one, which enables the data storage architecture (MOLAP, ROLAP or
HOLAP) to be selected with one click. The visual display of the trade-off
between size and performance makes optimisation, even for the naïve user,
an easy operation. This ease of use, combined with the fact that OLAP
Services is bundled with most versions of SQL Server 7, makes it an
appealing introduction to OLAP.
While the tool has high initial appeal, it is not a complete corporate solution.
The most obvious need is for an end-user tool, but this poses little difficulty
because entry-level tools can be freely downloaded from the Web, and many
third-party tools use the OLE DB for OLAP interface. However, a basic end-
user tool may still only provide a minimal system. Within a corporate envi-
ronment, OLAP requires report production and distribution facilities, web
access, customised applications, advanced analytics and metadata for end
users. OLAP Services provides very little of this. In this context it is a useful
component, but requires considerable supplementing.
When to use
The OLAP Services multidimensional engine should be considered if you
want:
• ease of use as a priority
• flexibility of storage options
• to be able to use a wide range of end-user tools
• a low-cost introduction to OLAP.
It is not suitable if you want:
• built-in financial or statistical functions for complex analytics
• to define complex models with user-defined levels
• more security than that provided by Windows NT
• a complete (client and server) solution from one vendor.
Product overview
Components
The main components of SQL Server 7 Enterprise and Standard Edition are:
• Data Transformation Services (DTS)
• the SQL Server Engine
• the Microsoft Repository
• Microsoft SQL Server 7.0 OLAP Services.
SQL Server 7 does not include a front-end tool for OLAP Services. The
Microsoft product for this will be Microsoft Excel version 9 (not released at
the time of writing).
The main focus of this evaluation is OLAP Services.
Figure 1 shows whether the component usually runs on the client or the
server, and its primary purpose.
OLAP Services
OLAP Services is a multidimensional engine that can access data from any
OLE DB source, and in turn can be accessed by any tool with an OLE DB for
OLAP as a consumer interface. It offers a variety of storage options, includ-
ing detail and aggregates stored in the MDDB (MOLAP), details and aggre-
gates stored in an RDBMS (ROLAP) and a hybrid combination in which
details are stored in an RDBMS and aggregates in the MDDB (HOLAP).
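The hybrid (HOLAP) behaviour can be pictured with a small sketch: aggregates sit in a fast multidimensional store, details stay relational, and a query checks the aggregate store before falling back to the detail rows. All names and the cache layout are invented for illustration, not taken from OLAP Services.

```python
# Hedged sketch of the HOLAP idea above (invented names throughout):
# precomputed aggregates in a multidimensional store, detail rows in a
# relational database; queries try the aggregate store first.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE detail (product TEXT, month TEXT, amount REAL);
INSERT INTO detail VALUES ('widget', 'Jan', 10), ('widget', 'Feb', 20);
""")

mddb_aggregates = {("widget", "*"): 30.0}   # precomputed rollup, all months

def total_sales(product, month="*"):
    if (product, month) in mddb_aggregates:  # aggregate hit: no SQL needed
        return mddb_aggregates[(product, month)]
    row = con.execute(                       # miss: compute from detail rows
        "SELECT SUM(amount) FROM detail WHERE product=? AND month=?",
        (product, month)).fetchone()
    return row[0]

print(total_sales("widget"))         # 30.0, from the aggregate store
print(total_sales("widget", "Jan"))  # 10.0, computed from relational detail
```

This is why HOLAP trades a smaller multidimensional store for slower detail-level queries, while MOLAP and ROLAP put everything on one side or the other.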
Several multidimensional business models can be combined to form a virtual
model. The most likely reason for this is that security is defined at model
level, which necessitates separate models for different user groups. Groups
requiring a more overall view will generally work with virtual models to
avoid duplication of data and effort.
Models can be partitioned. All partitions share the same dimensions, but
each partition can have a different storage option and degree of
optimisation. As in relational databases, the rationale for partitioning is
performance gain.
OLAP Services provides a wizard-driven environment with the facility to
drop down into an editor to make alterations.
Main purpose
Architectural options
A major feature of the SQL Server 7.0 architecture is that although the DTS,
Repository, SQL Server engine and OLAP Services come as a complete
datamart package, each of these has an open API. This means that DTS can
feed into any OLE DB target, and OLAP Services can take data from any
OLE DB source and feed it to any OLE DB for OLAP consumer. Similarly,
within each of the components (that is, DTS and OLAP Services) the wizard-
driven interface is useful for defining 80% of the required functionality, and
what cannot be added with scripting can generally be added with a COM
component.
Desktop architecture
OLAP Services is the server part of a client-server solution, so a desktop
architecture using a two-tier model is not supported.
Mobile architecture
The mobile architecture is supported by PivotTable Services, a COM compo-
nent on the client; this enables drill-down and similar pivot table features.
In many ways, PivotTable Services is a ‘lite’ OLAP Services.
Client tools incorporating this component can load and cache data, and can
then disconnect from the data source. It is not possible to store this data
persistently on the client.
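The load-and-disconnect pattern can be sketched in Python. This is an illustration of the behaviour described above, not the real PivotTable Services component; the class and data source are invented.

```python
# Sketch of the load-and-disconnect pattern: results fetched while
# connected are cached in memory, so the same queries can still be
# answered after the connection is severed. The cache is in-memory
# only, mirroring the lack of persistent client-side storage.

class MobileClient:
    def __init__(self, server):
        self.server = server      # dict standing in for the remote source
        self.cache = {}
        self.connected = True

    def query(self, key):
        if key in self.cache:
            return self.cache[key]
        if not self.connected:
            raise ConnectionError('not cached and connection severed')
        self.cache[key] = self.server[key]   # fetch and cache while online
        return self.cache[key]

    def disconnect(self):
        self.connected = False    # nothing is persisted to disk

client = MobileClient({'sales[1998]': 1200, 'sales[1999]': 1450})
client.query('sales[1998]')          # fetched while connected
client.disconnect()
print(client.query('sales[1998]'))   # served from cache: 1200
```

Queries for data that was never cached fail once the connection is dropped, which is the practical limit of this mobile architecture.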
Web architectures
Most OLAP tools use a four-tier architecture for web access, based on CGI
with an OLAP web server between the generic web server and the OLAP
database. Microsoft does not include a web server in the SQL Server 7.0
package, but has two alternative means of giving the user web access using
the PivotTable Services COM component and Active Server Pages.
PivotTable Services
The use of this COM component results in what appears to be a thin client,
inasmuch as a browser interface is used, but is in fact a fat client with the
processing being carried out on the local machine.
When the user accesses the web page, the first thing that happens is that
Microsoft’s PivotTable Service COM component is automatically
downloaded. The data for the model is then downloaded and the browser
interface used to locally process this. There is no generation of HTML pages,
so the web connection can be severed and the user can still continue to
manipulate the data.
Active Server Pages
The second architecture makes use of Active Server Pages, Microsoft’s
proprietary server-side scripting technology on the IIS Web Server. (The
URL of the Active Server Pages ends in ‘.asp’ rather than ‘.html’.)
The Active Server Pages are made up of embedded HTML, and script that is
interpreted by the web server at runtime. Generally, the script will establish
the connection, passing the user ID and password to OLAP Services and
then issuing some MDX commands. The result of these is passed back to the
Internet Information Server (IIS), which generates the HTML for the
browser.
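The Active Server Pages flow can be sketched as ordinary server-side code, written here in Python rather than VBScript. The connection and MDX-execution functions are placeholders standing in for a real session, not an actual ADO or OLAP Services binding.

```python
# Sketch of the ASP flow: the server-side script connects on the user's
# behalf, issues an MDX statement, and renders plain HTML that IIS
# returns to the browser. All names here are placeholders.

def connect(user, password):
    # stand-in for opening an OLAP Services session with credentials
    if password != 'secret':
        raise PermissionError('authentication failed')
    return {('1998', 'Sales'): 1200, ('1999', 'Sales'): 1450}

def run_mdx(session, mdx):
    # stand-in for executing an MDX statement; returns result rows
    return sorted(session.items())

def render_page(user, password, mdx):
    session = connect(user, password)   # the script holds the credentials
    rows = run_mdx(session, mdx)
    cells = ''.join('<tr><td>%s</td><td>%s</td><td>%d</td></tr>'
                    % (year, measure, value)
                    for (year, measure), value in rows)
    return '<table>%s</table>' % cells  # plain HTML for the browser

html = render_page('analyst', 'secret',
                   'SELECT {[Measures].[Sales]} ON COLUMNS FROM Sales')
print(html.count('<tr>'))   # two result rows
```

Because only generated HTML reaches the browser, this variant is a genuinely thin client, in contrast with the PivotTable Services approach.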
Independence of OLAP Services from the SQL Server 7.0 relational data store
OLAP Services is provided with SQL Server 7.0, so it is likely that many
users will use the two together. However, it is important to stress that OLAP
Services can access data in any database supporting OLE DB (and using the
OLE DB to ODBC mapper this includes ODBC-compliant databases).
Company background
History and commercial
Microsoft was founded in 1975 by Bill Gates and Paul Allen. Incorporated in
1981, it has become the largest independent software vendor in the world.
Fiscal 1998 revenues rose 27.5% to $14.49 billion, and net income increased
30% to $4.48 billion.
The style of Microsoft’s growth has been to combine internal product devel-
opment with the acquisition of companies or important personnel. If the
company perceives that a small company has developed a solution, it will
attempt to buy that company in its entirety. In the case of larger companies,
the same result is achieved by tempting away influential members of staff.
Microsoft entered the industrial-strength relational database market by first
licensing SQL Server from Sybase in 1987. In 1994, a new licensing agree-
ment gave Microsoft full ownership of the source code of its version, and a
clean break with Sybase.
In the OLAP area, Microsoft bought OLAP technology and the R&D team
from Panorama Software Systems in Tel Aviv, Israel in October 1996. In the
wider data warehousing market, Microsoft’s activities were geared towards
building up partnerships for its ‘Alliance for Data Warehousing’. During
1997, the most tangible activity was the announcement, in September, of the
beta specification of OLE DB for OLAP, a set of COM interfaces extending
OLE DB for access to multidimensional data.
In 1998, Microsoft delivered products that will ensure it is regarded as a
serious competitor in this area. The company has developed a ‘Data Ware-
housing Framework’ that combines interface specifications and products.
The interface specifications are OLE DB for OLAP (version 1 was made
available via the Web in February 1998) and the extensions of the Database
Information Model to cover the storage of metadata about data transforma-
tions and multidimensional business models in the Microsoft Repository.
Customer support
Support
SQL Server 7.0 comes with the standard Microsoft helpline and support
services.
Training
The company has stated that it has plans to support an extensive number of
short courses, but details are not yet available.
Consultancy services
Consultancy is not provided by Microsoft, but is available through its
partners.
Distribution
US
Microsoft Corporate Headquarters
One Microsoft Way
Redmond, WA 98052-6399
USA
Tel: +1 425 882 8080
Fax: +1 425 936 7329
Europe
Microsoft Europe
Microsoft Properties France
Tour Pacific
Cedex 77
92977 Paris-La Defense
France
Tel: +33 1 46 35 1010
Fax: +33 1 46 35 1030
Asia-Pacific
Microsoft Asia-Pacific headquarters
65 Epping Road
North Ryde, NSW 2113
Australia
Tel: +61 2 870 2200
Fax: +61 2 870 2769
http://www.microsoft.com/
Product evaluation
End-user functionality
Summary
Basic design
Design interface
The interface is easy to use and wizard assistance is available at all stages.
There are three levels to the design interface: the high-level wizard-driven
approach; editors for adding dimensions and calculated members; and
programmatic enhancement through the addition of COM components.
Visualising the data source
When using the cube wizard to specify the data source, a sample of data is
shown.
Universally available mapping layer
There is no support for a universally available mapping layer.
Prompts for metadata
When the model is being built, there are no prompts for metadata about the
model or its components.
There is no direct mechanism for storing the metadata about the multidi-
mensional model in the Microsoft Repository.
User-definable extensions
The limited built-in functionality can be enhanced by writing the function as
a ‘.dll’ file and then registering it as a function.
Data mining
There is no support for data mining functionality.
Web support
Summary
OLAP Services, by itself, does not provide web support. As with several of the
features considered in this evaluation, if web support is required, it has to
be added using an appropriate third-party or end-user tool.
Some of the entry level tools offer web access using a COM component or
Active Server Pages on IIS. This enables users to explore models using
browser access, but there is no support for the creation of models, nor for
using the Web and the Internet as distribution mechanisms.
Management
Summary
OLAP Services has good support for managing size and partitioning the
data, but poor support for scheduling uploads and providing user security
and controls.
OLAP Services provides a published API, Decision Support Objects (DSO), to
control the management aspects of the tool. This is used, for instance, by the
cube-building wizards. Microsoft has not yet produced easy-to-use
functionality to support the scheduling of loads and updates, so users must
either do this manually or write their own applications using the DSO.
The most obvious enhancement needed in this area is the provision of wizard
support for the management of data.
Management of models
Separate management interface
The management of data, models and users is carried out through a variety
of interfaces.
Security of models
OLAP Services relies on NT authenticating the user.
Query monitoring
Information on queries is provided via the Usage-Based Optimization Wizard,
which enables the administrator to select a model and a partition within it,
and then see how many queries were made after a given date, how many took
longer than a specified time and which queries are popular. While it suggests
which aggregates should be added or replaced, it does not suggest which
aggregates should be deleted because they are not used.
Management of data
How persistent data is stored (not scored)
OLAP Services offers three options for storing data:
• MOLAP (both aggregates and detail data are stored in the MDDB)
• ROLAP (both aggregates and detail data are stored in the RDBMS)
• HOLAP (aggregates are stored in the MDDB and detail data in the
RDBMS).
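The three options amount to a simple dispatch rule for where a read is served from. The sketch below is an illustration of that rule, not the engine's actual implementation.

```python
# Sketch of the three storage options: the query layer decides whether
# a read goes to the multidimensional store (MDDB) or the relational
# store (RDBMS), depending on the storage mode and on whether the value
# requested is an aggregate or detail data. Illustrative only.

def storage_target(mode, is_aggregate):
    """Return which store serves the read under each storage option."""
    if mode == 'MOLAP':
        return 'MDDB'                        # everything in the MDDB
    if mode == 'ROLAP':
        return 'RDBMS'                       # everything in the RDBMS
    if mode == 'HOLAP':
        return 'MDDB' if is_aggregate else 'RDBMS'
    raise ValueError('unknown storage mode: %r' % mode)

for mode in ('MOLAP', 'ROLAP', 'HOLAP'):
    print(mode, storage_target(mode, is_aggregate=True),
          storage_target(mode, is_aggregate=False))
```

HOLAP's split explains its appeal: fast aggregate reads from the MDDB without duplicating the detail data already held in the warehouse.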
Scheduling of loads/updates
There is no direct support for this. Scheduling has to be done manually, or
developers can write an application using the Decision Support Objects API
to control it; this requires programming skills.
Event-driven scheduling
There is no direct support for this. It could be done using DSO.
Failed loads/updates
There is no support for scheduling.
Distribution of stored data
Partitions of a model can be distributed on different servers.
Sparsity (only for persistent models)
Sparsity is handled ‘under the bonnet’ and the designer does not have to
make any decisions about it.
Methods for managing size
The Data Storage and Aggregation Wizard gives an excellent visual repre-
sentation of the effects of trading off size (a function of the amount of pre-
calculated aggregates) for performance gain. From this representation, the
user makes a choice.
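The trade-off the wizard visualises can be made concrete with a toy model: every grouping of the cube's dimensions is a candidate aggregate, the space an aggregate costs grows with the cardinalities of its dimensions, and the designer chooses how much space to spend. The sizing model and greedy selection below are invented for illustration; they are not the wizard's actual algorithm.

```python
# Toy version of the size/performance trade-off: each subset of the
# cube's dimensions is a candidate pre-calculated aggregate whose row
# count is the product of its dimensions' cardinalities. We greedily
# precompute the cheapest aggregates that fit a space budget.

from itertools import combinations

def agg_size(agg, cardinality):
    size = 1
    for dim in agg:
        size *= cardinality[dim]   # rows in the precomputed aggregate
    return size

def choose_aggregates(cardinality, size_budget):
    dims = sorted(cardinality)
    cands = [c for r in range(1, len(dims) + 1)
             for c in combinations(dims, r)]
    chosen, used = [], 0
    for agg in sorted(cands, key=lambda a: agg_size(a, cardinality)):
        cost = agg_size(agg, cardinality)
        if used + cost <= size_budget:   # more space, more precomputation
            chosen.append(agg)
            used += cost
    return chosen, used

card = {'Time': 12, 'Region': 10, 'Product': 100}
chosen, used = choose_aggregates(card, size_budget=1500)
print(len(chosen), used)   # 5 of the 7 candidate aggregates fit
```

Raising the budget admits larger aggregates, which is exactly the curve of size against expected query performance that the wizard plots.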
In-memory caching options
There is no support for in-memory caching.
Informing the user when stored data was last uploaded
There is no support to enable end users to be informed of the currency of the
data.
Management of users
Multiple users of models with write facilities
Not applicable because write back is not supported.
User security profiles
There is no real support for defining user profiles, because user access to a
model is limited to read access or none. Security is controlled by creating different
models for different users. To minimise duplicated data, senior managers see
a virtual model made up of several physical ones.
Query governance
There is no support for query governance. While this is not a problem when
OLAP Services is in MOLAP mode, it may be required when used in ROLAP
mode.
Restricting queries to specified times
There is no support for this.
Management of metadata
Controlling visibility of the ‘road map’
There is no metadata other than the model itself. While there is a mechanism
to prevent users accessing the model, there is nothing beyond the security
offered by NT to restrict the visibility of the model.
Adaptability
Summary
The most notable strength of the tool with regard to adaptability is the ease
of changing the storage architecture. It is also easy to add dimensions and
measures. There is effectively no metadata to synchronise, which – although it
is a limitation in other areas – does at least make implementing the changes
straightforward. The tool is prevented from getting a higher score by the lack
of support to track and predict the impact of changes.
Metadata
Synchronising model and model metadata
Not applicable, because the only metadata held is schema details.
Impact analysis
There is no support for impact analysis.
Metadata audit trail (technical and end users)
There is no support to show the end user the history of the metadata. How-
ever, there is very little metadata so this is not significant.
Access to upstream metadata
There is no integration with extraction tools to access metadata generated
upstream.
Performance tunability
Summary
The administrator has most scope for performance tuning when the tool is
used in MOLAP mode. The most useful feature is the visualisation of the
relationship between database size and performance. In ROLAP and HOLAP
mode there is limited scope for performance tuning.
ROLAP
Multipass SQL
OLAP Services uses multipass SQL when in ROLAP mode.
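What "multipass SQL" means can be illustrated with a metric that a single aggregate query cannot produce, such as each region's share of total sales: one pass computes per-region totals, a second computes the grand total, and the results are combined. The SQL below is generic illustration, not the tool's generated output.

```python
# Sketch of multipass SQL: "share of total" needs one pass for the
# per-region totals and a second for the grand total, combined after
# both passes. Generic SQL, not OLAP Services' actual output.

import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE sales (region TEXT, amount REAL)')
con.executemany('INSERT INTO sales VALUES (?, ?)',
                [('North', 60.0), ('North', 40.0), ('South', 100.0),
                 ('South', 150.0), ('South', 150.0)])

# Pass 1: per-region totals
per_region = dict(con.execute(
    'SELECT region, SUM(amount) FROM sales GROUP BY region'))

# Pass 2: grand total
(grand,) = con.execute('SELECT SUM(amount) FROM sales').fetchone()

# Combine the passes into the final metric
share = {region: total / grand for region, total in per_region.items()}
print(share)   # North 20%, South 80%
```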
Options for SQL processing
There are no options for specifying where the processing takes place. It is
always carried out on the database server.
Speeding up end-user data access
When in ROLAP mode, retrieved data is cached for the duration of the
session. When the cache is full, some data will be lost.
Aggregate navigator
There is no support enabling SQL queries to transparently make use of
summary tables.
MOLAP
Trading off load time/size and performance
One of the most impressive features of OLAP Services is its wizard support
for trading off the percentage of pre-calculated aggregates (size) against the
performance gain. This plots a graph in real-time showing the relationship
between the two, and the user can then choose what percentage of aggre-
gates to have.
As well as reducing the number of pre-calculated aggregates, reducing load
time can be done by limiting the recalculation of aggregates when new data
is entered, so that only aggregates affected by the new data are recalculated.
In OLAP Services, this is achieved by loading the new data into partitions
leaving the original data as it was. The drawback of this is that end-user
queries may then require access to multiple partitions. This can be coun-
tered by merging partitions, and will generally be done at weekends or when
the system is quiet.
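The incremental-load cycle described above can be sketched in Python. The class and method names are invented; the point is the shape of the mechanism: each load creates a fresh partition, queries fan out over all partitions, and a quiet-time merge consolidates them.

```python
# Sketch of incremental loading: new data goes into its own partition
# so existing aggregates need no recalculation; queries then fan out
# over all partitions, and a quiet-time merge consolidates them again.

class Cube:
    def __init__(self):
        self.partitions = []        # each partition: list of (key, value)

    def load(self, rows):
        self.partitions.append(list(rows))   # fresh partition per load

    def total(self, key):
        # query fans out: every partition must be consulted
        return sum(v for part in self.partitions
                     for k, v in part if k == key)

    def merge(self):
        merged = [row for part in self.partitions for row in part]
        self.partitions = [merged]  # back to a single partition

cube = Cube()
cube.load([('Sales', 100), ('Cost', 40)])
cube.load([('Sales', 25)])          # incremental load, nothing recalculated
before = cube.total('Sales')        # answered across two partitions
cube.merge()                        # e.g. run at the weekend
print(before, cube.total('Sales'), len(cube.partitions))
```

The query result is the same before and after the merge; merging only removes the per-query cost of visiting multiple partitions.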
Multiple users
OLAP Services has not been available long enough to establish the degree of
support for multiple users.
Processing
Use of native SQL to speed up data extraction
All data extraction is carried out using OLE DB for OLAP.
Distribution of processing
There is no support for distributing the processing between multiple OLAP
Services servers.
SMP support
The architecture is multi-threaded and can take advantage of SMP.
Customisation
Summary
The low score against this criterion reflects the absence of support provided
by the tool for developing customised interactive applications. However, if
viewed as a component within an application, the ubiquity of the API and the
low cost make it attractive to developers. It cannot be used to customise, but
is itself customisable.
OLAP Services is a component, with an open API, that can be used within a
customised application. The development of the application can be carried
out in any COM-compliant environment such as Visual Basic or C++, but
OLAP Services itself does not provide an environment for developing these.
Customisation
Option of a restricted interface
This is a feature of the end-user tool. It is generally not an option in the
entry-level tools considered in this evaluation.
Ease of producing EIS-style reports
There is no direct support within OLAP Services to provide a customised,
easy-to-use environment. It could either be achieved via an end-user tool or
by incorporating OLAP Services as a component within an application.
Applications
Simple web applications
There is no direct support to build simple EIS applications to run in a
browser.
Development environment
No OLAP-specific development environment is provided.
Use of third-party development tools
Applications that access data in OLAP Services using the OLE DB for OLAP
interface can be developed in any COM-compliant development
environment.
Deployment
Platforms
OLAP Services runs on Windows NT; end-user tools run on Windows 95/98 and
NT.
Using Windows 95 or 98 is appropriate for personal use, but not as a server
for multiple users.
Data access
OLAP Services can access any data available via OLE DB or ODBC.
Standards
OLAP Services uses Microsoft’s OLE DB for OLAP API.
Published benchmarks
There are no published benchmarks for OLAP Services.
Price structure
OLAP Services is not available separately, but is included in the
Enterprise and Standard Editions of SQL Server 7.0.
At the time of writing (pre-launch), the price structure was not available, but
Microsoft states that it will be similar to SQL Server 6.5. (SQL Server NT
6.5 with ten-user licences costs approximately £1,500 in the UK.)
Microstrategy DSS
Product Suite
Summary
At a glance
Terminology of the vendor
Ovum’s verdict
Product overview
Future enhancements
Commercial background
Product evaluation
Deployment
Platforms
Data access
Standards
Published benchmarks
Price structure
Evaluation: Microstrategy – DSS Product Suite Ovum Evaluates: OLAP
At a glance
Developer
Microstrategy, Vienna, Virginia, USA
Version evaluated
DSS Product Suite version 5.5, consisting of: DSS Architect, DSS Agent, DSS
Server, DSS Administrator, DSS Web, DSS Broadcaster, DSS Executive and
DSS Objects
Key facts
• A ROLAP product with a client-based engine generating SQL
• End-user tools run on Windows 3.x, 95, 98, NT and OS/2; server runs on
Windows NT. Web access is supported
• Microstrategy is repositioning itself to provide commercial business
intelligence applications for the e-commerce market
Strengths
• Easy to build and access models if data is stored in a data warehouse
using a snowflake or star schema
• Strong data analysis and broadcasting capabilities via the Web
• Includes development tools for EIS applications, and an API for building
applications in an OLE-aware language
Points to watch
• Highly dependent on the design and processing capabilities of the data
warehouse for performance
• Limited support for specialised analysis – relies heavily on Excel for
analytical capabilities
• Large implementations involve a wider data warehousing consideration –
a significant amount of consulting is usually required
Ratings
[Rating bars (scale 1 to 10) for web support, management, adaptability, performance tunability and customisation are not reproduced here]
Ovum’s verdict
What we think
The DSS tools are well integrated and are becoming increasingly web-
focused. They are also highly dependent on a data warehouse for design and
performance. Any purchase decision will therefore involve a wider data
warehousing consideration. Large implementations usually come with a lot
of consulting, with rollouts taking from 6 to 18 months to complete.
The ROLAP architecture makes the DSS tools well suited to the routine
analysis of large volumes of data. The end-user tools are easy to use and
provide most of the functionality required by casual and regular OLAP
users. DSS Architect’s well designed interface simplifies the mapping of
relational to multidimensional structures.
DSS Broadcaster is a major component of the toolkit. It provides excellent
support for distributing information in a timely manner via the Web and
other wireless channels. The sophisticated reporting and broadcasting
capabilities are in line with Microstrategy’s belief in the commercial benefits
of distributing (and selling) analytical information to a wide range of users.
However, the significant investment required to structure the data in order
to take advantage of the features of the toolkit leads to a degree of lock-in.
While the snowflake schema does not preclude using other reporting and
query tools, it is unlikely to be the preferable design schema if other tools
are to be used. However, the advantage of the Microstrategy approach is that
the company can provide full support for building, maintaining and using a
data warehouse for OLAP.
Although DSS Web exploits the Excel function library, the client-server tools
offer limited support for complex and specialised analytics. This is primarily
a consequence of the ROLAP architecture, which uses specialised analytics
created and stored in the data warehouse, rather than creating them within
the OLAP tool. Microstrategy has yet to introduce DSS Server as an OLE DB
for OLAP data provider; this limits end users to the vendor’s own front ends.
When to use
The Microstrategy tools are suitable if you:
• have already built a data warehouse using a snowflake schema
• have customer management or other applications where the data
volumes are large and there are many members in each dimension
• need good broadcasting support using a variety of devices.
It is less suitable:
• as a departmental OLAP solution, because of its dependency on the data
warehousing architecture
• if you require many different or rapidly changing business models
• if you want to develop highly customised OLAP applications
• if you require specialised and highly complex metrics beyond those
provided by Excel.
Product overview
Components
The main components of DSS Suite (all version 5.5) are:
• DSS Architect
• DSS Agent
• DSS Server
• DSS Administrator
• DSS Web
• DSS Broadcaster
• DSS Executive
• DSS Objects.
Figure 1 shows whether the component runs on the client or the server, the
stage at which it is typically used, and its primary function.
DSS Architect
This is a developer tool used to build multidimensional models and define
the mappings to the physical database schema. It is also used to specify how
the model appears to the user.
The Microstrategy solution is closely tied to the dimensional design of the
data warehouse, so the manual gives extensive advice about the appropriate
warehouse schema to use. This generally involves using the snowflake
schema, although a star schema can be used for simplicity (but will not give
such good performance).
DSS Agent
An end-user tool used for report development (for example, applying filters
and templates to the model defined in DSS Architect to specify a report) and
ad hoc analysis of the business model.
DSS Agent generates the SQL to retrieve data from the data source and can
be configured in two-tier mode, with direct access to the data warehouse, or
three-tier mode, with DSS Server as an intermediary. It also integrates with
ETL tools to allow users to view metadata from data warehouses.
DSS Server
This is the core server component for the DSS Suite. SQL queries, whether
generated by DSS Agent or DSS Web, are passed through this server. The
server redirects queries to either the source database (where all processing
occurs) or to cached datasets.
DSS Server provides three tools:
• Scheduler – uses scheduled agents to refresh caches and reports based on
time- or event-driven criteria. The scheduling component can be run as a
Windows NT Service
• Cache Manager – a graphical console for monitoring report caches
• a runtime management environment – allows information to be displayed
about jobs that are running. A transaction log is provided to store
statistical information about system usage and SQL queries.
DSS Administrator
This is the main administration component for the DSS Suite. It consists of
two main interfaces:
• Warehouse Monitor – provides information about performance trends,
usage trends and statistics by user, report and time. It can be used to
assist performance-tuning of the data warehouse (for example, the need
for aggregation tables and indexing), and to determine the best time to
schedule jobs and cache results
• Object Manager – controls the management of users and DSS Suite
objects (such as reports, templates, filters and measures). It can be used
to generate user and group profiles, and to define access rights to
application objects and system functions.
DSS Web
This is a web server that enables a web browser to be used as a thin client
(an alternative interface to DSS Agent), as well as providing an environment
for developing an EIS or a customised front end. Development is achieved
with a combination of HTML and JavaScript, with some Java applets.
DSS Web requires a four-tier architecture. Messages are passed from the
browser to the standard Internet server (for example, Microsoft’s IIS), then
to DSS Web Server where the query engine generates the necessary SQL,
which is passed to DSS Server via RPC and then on to the data warehouse.
DSS Web is licensed either for full functionality (DSS Web Professional) or
as a viewing tool for pre-built reports, but without the facilities to drill
anywhere or create reports (DSS Web Standard Edition).
DSS Broadcaster
This is an information distribution and broadcasting server. DSS
Broadcaster uses reports created in DSS Agent or DSS Web. It manages the
distribution of these reports to a variety of end-user devices, including
mobile phones, pagers, PDAs and fax terminals, and via e-mail clients. DSS
Broadcaster includes a sophisticated HTML generation engine that exploits
XML (extensible mark-up language) and XSL (extensible style-sheet
language), to deliver formatted and highly functional e-mails.
DSS Broadcaster’s administration console offers a number of graphical tools
to manage content and set up the distribution environment. Reports can be
scheduled or event-driven, and dynamic distribution lists can also be defined.
DSS Executive
This is an easy to use development environment that is used to create EIS-
like front-end interfaces for casual users of DSS Agent. It provides a number
of ready-to-use EIS objects (buttons and icons) that can be mapped on to
reports to create simple briefing-book applications.
DSS Objects
This acts as an interface to DSS Server that allows for the development of
custom applications using OLE-enabled application development languages
(such as Visual Basic, Visual C++, Visual Basic for Applications and Delphi).
The API provided in DSS Objects enables the application to make high-level
function calls to the query engine and server, making use of predefined
metrics, templates and filters. The developer can create user-defined filters
and templates using the API. There are, however, no ready-to-use components
that provide a GUI interface for creating these objects when building an
application.
Architectural options
Full mid-tier architecture
Microstrategy only supports a ROLAP approach.
Desktop architecture
Running DSS Agent directly against the data source is the simplest two-
tiered configuration. DSS Agent generates the multipass SQL, which is
processed on the database, and the resulting datasets are manipulated in
DSS Agent. This configuration is usually only used for small projects and
testing.
Mobile architecture
There is no direct support to download a subset of data from the relational
database and run queries against this. By definition, ROLAP tools are not
geared to support this architectural configuration.
Feeding a datamart
The toolset provides the option of feeding retrieved data into a relational
database on the network. Microstrategy calls this ‘dynamic datamarting’. It
is used as a means of passing data to other tools for further analysis.
Division of responsibility
The tool lends itself to a clear division of responsibilities between designer
and user. The mapping layer is defined by the designer using DSS Architect,
and reports are defined by the user in DSS Agent.
Within DSS Architect, the designer defines the information that can be used
to build multidimensional business models, maps this on to the source data
and specifies how it is presented to the user.
A report is based on a project defined in DSS Architect. The report adds
templates (which define the slice of the model to be used) and filters (the
rows to be included). In DSS Agent, users can build new reports, run previ-
ously created reports or run ad hoc queries.
A selected filter can then be modified if required. The results of a report can
be viewed in grid, graph, map or alert mode. For alert mode, data meeting
previously specified criteria is displayed with headlines, optional headers
and optional footers.
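The division of labour between template and filter can be sketched in Python: the template names the slice of the model shown (the columns kept), the filter picks the rows included, and a report combines the two. The names and the flat record layout are invented for illustration.

```python
# Sketch of Microstrategy's report = template + filter idea: the
# template defines the slice of the model (columns kept), the filter
# defines the rows included. Illustrative data and names only.

def run_report(rows, template, filter_fn):
    """Apply a filter (row selection) then a template (column slice)."""
    kept = [r for r in rows if filter_fn(r)]
    return [{col: r[col] for col in template} for r in kept]

warehouse = [
    {'region': 'North', 'year': 1998, 'product': 'Widget', 'sales': 100},
    {'region': 'South', 'year': 1998, 'product': 'Widget', 'sales': 250},
    {'region': 'North', 'year': 1999, 'product': 'Gadget', 'sales': 300},
]

template = ['region', 'sales']                  # slice of the model
north_only = lambda r: r['region'] == 'North'   # rows to include

print(run_report(warehouse, template, north_only))
```

Because templates and filters are separate reusable objects, the same filter can drive many report layouts and vice versa.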
Information broadcasting
A key differentiator of DSS Suite is the importance it places on ‘push’ tech-
nology (Microstrategy calls this ‘broadcasting’) in distributing information to
large communities of users, both inside and outside an organisation.
DSS Broadcaster is an information broadcast server component that enables
large-scale report distribution. It includes a web-based self-subscription
interface through which users can register themselves, subscribe to services
and choose device types for delivery of information. Wizards are provided to
guide administrators through the configuration of personalised broadcasting
criteria, such as schedules, styles (using content filters) and threshold
conditions.
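The subscription-and-threshold model can be sketched in Python. The class below is an invented illustration of the pattern, not DSS Broadcaster's API: each subscriber registers a delivery device and an optional condition, and a report run is pushed only to subscribers whose condition the new figures satisfy.

```python
# Sketch of the broadcast model: users subscribe with a delivery device
# and an optional threshold condition; a report run is pushed only to
# the subscribers whose condition it satisfies. Illustrative only.

class Broadcaster:
    def __init__(self):
        self.subs = []   # list of (user, device, condition)

    def subscribe(self, user, device, condition=lambda report: True):
        self.subs.append((user, device, condition))

    def broadcast(self, report):
        deliveries = []
        for user, device, condition in self.subs:
            if condition(report):        # threshold-style trigger
                deliveries.append((user, device))
        return deliveries

b = Broadcaster()
b.subscribe('alice', 'email')                            # every run
b.subscribe('bob', 'pager', lambda r: r['sales'] < 90)   # alert only

print(b.broadcast({'sales': 120}))   # bob's threshold not met
```

Dynamic distribution lists fall out of the same structure: the list of deliveries is recomputed from the conditions on every run rather than stored statically.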
Future enhancements
The next major release of DSS Suite is scheduled for late 1999. The new
version will support a new server-centric architecture, with thinned down
clients, allowing easier deployment and centralised management. The web
tools will exploit XML and XSL technology for report generation and format-
ting. Microstrategy has given no further details of the new release; formal
announcements are expected in mid-1999.
Microstrategy is still considering support for Microsoft’s OLE DB for OLAP
as a data provider, but has not announced a definite delivery date.
Commercial background
Company background
History and commercial
Microstrategy was founded in 1989 by its current president, Michael Saylor.
The company originally offered consultancy rather than products. It built
custom decision-support applications for large companies such as DuPont,
Merck and Xerox.
DSS Agent was released in 1994, and was the first in a line of decision-
support software based on ROLAP. In 1995, DSS Server and DSS
Administrator were released. The first web program was DSS Web, which
was released in 1996; DSS Broadcaster was released in August 1998.
In 1999 the company restructured itself into three principal business units:
• Business Intelligence: focuses on traditional business intelligence
solutions using its ROLAP tools and a central data warehouse
• Commercial Intelligence: develops decision support applications to
support business-to-business/customer/partner information provision,
focusing specifically on the e-commerce market
• Consumer Intelligence: provides business-to-customer information
delivery applications and services to industry sector consumers such as
telcos and ISPs. These applications are targeted directly at information
consumers and revenue is derived mainly from advertising and
subscription fees. One example is DSS Stockmarket, which delivers
personalised stock market reports to subscribers.
Microstrategy is one of the fastest growing software companies in the OLAP
industry. The company was entirely self-financing until it went public in
June 1998, which raised $48 million. Microstrategy now employs over 900
people. It has its headquarters in Vienna, Virginia with 27 offices and a
worldwide network of VARs and distributors. Microstrategy has had an
impressive growth rate since going public. Revenues for 1998 increased by
99% to $106.4 million, and net income was $6.2 million. While originally a
consultancy-based company, Microstrategy now generates around 70% of its
revenue through software.
Customer support
Support
Microstrategy offers worldwide support at various levels according to the
maintenance agreement in place. This includes 24×7 and full support account
management.
Support is provided through two major centres located in Washington DC,
US and Slough, UK. Local support numbers are available for all supported
countries.
Training
Microstrategy runs a variety of courses for its customers and partners,
including a foundation and advanced course (one day each), a two-day data
warehouse design course and a three-day installation and management
course.
The company also provides a partner programme aimed at systems integra-
tors, VARs, OEMs, partners and distributors, comprising all the above
training plus certification.
Consultancy services
Consultancy is available directly from Microstrategy, and from partners such
as Andersen Consulting and Renaissance Worldwide.
Distribution
US
Microstrategy
8000 Towers Crescent Drive
Vienna
Virginia 22182
USA
Tel: +1 703 848 8600
Fax: +1 703 848 8610
Europe
Microstrategy
St Martin’s Place
51 Slough Rd
Slough
Berkshire SL1 3UF
UK
Tel: +44 1753 826100
Fax: +44 1753 826101
Asia-Pacific
Microstrategy
41 Dillon Street
Paddington
Sydney, NSW
Australia
Tel: +61 2 9360 0240
Fax: +61 2 9331 3542
http://www.strategy.com
E-mail: info@strategy.com
Product evaluation
End-user functionality
Summary
End users can easily access the multidimensional model via a desktop or web
interface. Flexible access via the desktop is facilitated by the ability to select a
‘high level’ or ‘analyst’ interface. In general, the end-user interface provides
most of the features that OLAP users expect in a well designed GUI.
DSS reports provide a range of sophisticated presentation features, including
mapping capabilities. DSS Broadcaster also adds some very useful publishing
capabilities for easy and flexible distribution, including the ability to dynami-
cally define address lists.
Basic design
Design interface
The mapping layer for the reports is defined in DSS Architect using an
intuitive graphical interface. All aspects of the mapping process – establishing
connectivity, selection of warehouse tables, identification of fact columns and
the definition of dimensions and metrics – are simplified with point-and-click
functionality.
A report (which is a set of dimensions, measures and filters) can be generated
from this using a Report Wizard. The design interface, in all cases, is simple
and easy to use.
Visualising the data source
The tables in the source database are listed and the structure of each can be
optionally displayed. A sample of data from the tables can be viewed using
the component window.
Universally available mapping layer
There is partial support for this, because the project serves as a mapping
layer. This stores the metadata that describes the relationship between the
logical model and the database. However, this layer is not universally
available.
Multiple designers
Multiple designers
Only users with the appropriate security can edit objects, but within this
group there is no mechanism to support multiple designers simultaneously
working on objects.
Financial functions
There are no specialised financial functions provided by DSS Agent. Some
support is available in DSS Web (via Excel).
Statistical functions
This is not provided by DSS Agent. It is only supported in DSS Web through
its integration with Excel.
Trend analysis
This is not provided by DSS Agent. However, DSS Web users can use Excel
for simple trend analysis.
Simple regression
This is not provided by DSS Agent. However, DSS Web users can use Excel’s
linear regression techniques.
Time-series forecasting
There are no specialised time-series forecasting functions.
User-definable extensions
There is no scripting language available to extend the range of functions.
However, there is considerable flexibility to build and re-use new measures
in three ways:
• the function builder interface lets users create their own functions and
apply them to a data series for display in report writer mode
• basic facts from the data warehouse can be combined in reports to build
more advanced metrics (for example, ‘inventory’ and ‘sales figures’ can
be combined to calculate new measures such as ‘turnover’ and ‘sell-through’)
• conditions can be embedded inside calculations, with qualifications added
at runtime, or ‘self-adjusting’ measures built that can be re-used
across the enterprise and multiple reports.
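The idea of composing reusable derived metrics from base facts can be sketched outside the tool. The fact names and the sell-through formula below are illustrative assumptions for the example, not DSS Agent syntax:

```python
# Illustrative sketch: composing reusable derived metrics from base facts.
# Fact names and the sell-through formula are assumptions, not DSS syntax.

base_facts = {"sales_units": 120, "inventory_units": 280}

# A metric is a named function over base facts, so it can be re-used
# across many reports rather than redefined in each one.
metrics = {
    "turnover": lambda f: f["sales_units"],
    # Sell-through: units sold as a share of units available.
    "sell_through": lambda f: f["sales_units"]
                              / (f["sales_units"] + f["inventory_units"]),
}

def evaluate(name, facts):
    return metrics[name](facts)

print(round(evaluate("sell_through", base_facts), 2))  # 120/400 = 0.3
```

The point of the sketch is that the metric definition, once built, is independent of any particular report.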
Datamining
There is no support for datamining.
Web support
Summary
Microstrategy offers good web support via two products: DSS Web and DSS
Broadcaster. DSS Web has several modes: it provides a development environ-
ment, acts as a server enabling thin client creation and access to sophisticated
reports, and provides administrative control of web usage. It also integrates
with the Excel function library to provide greater analytical capabilities than
the DSS Agent client.
DSS Broadcaster is designed to distribute reports to users. A unique feature
is the ability to distribute these to a range of different devices simply by using
a pull-down menu.
Management
Summary
Management of models
Separate management interface
There are two management interfaces:
• DSS Server – for realtime management
• DSS Administrator – for monitoring and tuning the system.
Both provide graphical interfaces and most administration operations are
supported by point-and-click and drag-and-drop.
Security of models
The Microstrategy tools are based on the assumption that most of the
security functions are provided by the database used for the data
warehouse. Within the toolset there is limited support provided by the log-
on ID of the creator of an object determining whether it can be seen by a
group or an individual. The rationale of this facility is to share, rather than
create, a secure environment.
Query monitoring
DSS Administrator’s Warehouse Monitor provides an easy-to-use interface
for a range of useful monitoring information, including a list of most
frequently used reports, a breakdown of web and non-web usage, user
statistics (such as resource consumption and data volume per user) and
information for load balancing.
The information obtained includes:
• table-hit frequency – to identify aggregate table utilisation and
partitioned table utilisation
• individual query statistics – such as total query execution time, SQL
generation time, queue time and SQL execution times.
Management of data
How persistent data is stored (not scored)
The Microstrategy toolkit does not automatically create persistent versions
of all models, but there is an option to cache data. In this case, a cache is not
a volatile store but a file that can be saved either locally or on the central
server. The two advantages of this are that it:
• reduces the processing load on the data warehouse
• speeds up retrieval time.
Scheduling of loads and updates
The default schedule for refreshing the cache can be specified in DSS Agent,
DSS Web and DSS Broadcaster. The schedule includes details about where
the cache is to be stored and its duration. The units used to define the expiry
date are days, weeks, months and years.
DSS reports can have individual refresh schedules.
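The behaviour described – a non-volatile report cache with a configurable expiry – can be sketched as follows. The file layout, function names and duration units are assumptions for illustration, not the actual DSS implementation:

```python
# Minimal sketch of a non-volatile report cache with an expiry duration.
# File format and function names are illustrative assumptions.
import json
import os
import time

CACHE_DIR = "cache"
DAY = 24 * 3600
UNITS = {"days": DAY, "weeks": 7 * DAY, "months": 30 * DAY, "years": 365 * DAY}

def put(report_id, rows, duration=1, unit="days"):
    """Save report rows to a file cache with an expiry date."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    entry = {"expires": time.time() + duration * UNITS[unit], "rows": rows}
    with open(os.path.join(CACHE_DIR, report_id + ".json"), "w") as f:
        json.dump(entry, f)

def get(report_id):
    """Return cached rows, or None if absent or expired
    (in which case the caller re-queries the warehouse)."""
    path = os.path.join(CACHE_DIR, report_id + ".json")
    if not os.path.exists(path):
        return None
    with open(path) as f:
        entry = json.load(f)
    if time.time() > entry["expires"]:
        os.remove(path)  # expired: force a warehouse refresh
        return None
    return entry["rows"]

put("sales_by_region", [["East", 42]], duration=2, unit="weeks")
print(get("sales_by_region"))
```

Because the cache is a file rather than a volatile store, it survives restarts and delivers the two advantages noted above: less load on the warehouse and faster retrieval.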
Event-driven scheduling
Administrators can create a schedule that refreshes the cache after certain
events; for example, a data warehouse load, a general ledger update or a
critical threshold within the data warehouse.
Failed loads and updates
There is no support to inform the administrator that a refresh schedule has
failed.
Distribution of stored data
Caches can be optionally stored on the client or server. A mixture of caching
options is supported.
Management of users
Multiple users of models with write facilities
This is not applicable: write facilities are not provided.
User security profiles
There is limited support to define security profiles for individuals or groups
from within the tool. Users can, optionally, be allocated to groups by adding
lines to the configuration file, and this will define the objects (that is, reports,
metrics, templates and filters) that they can see. This is, however, more of a
convenience than a security feature.
Most of the security functions are provided by the database.
Query governance
DSS Agent generates a report cost estimate, which can be viewed by the
user, and is also passed to DSS Server for governing and prioritising queries.
A time warning can be attached to a report that generates a message box if a
threshold is exceeded.
Within DSS Server, the emphasis is on governing the overall set of query
processes rather than particular users. Controls include size limits on the
number of rows in a result set, time-outs for long-running queries and a
maximum number of concurrent jobs per user.
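The governing controls listed above can be sketched as a simple admission check. The thresholds and the cost estimates themselves are invented for the example; only the categories of control come from the description above:

```python
# Sketch of server-side query governing: a result-set row limit, a
# time-out and a per-user concurrency cap. All thresholds and the cost
# estimates are illustrative assumptions.
MAX_ROWS = 100_000          # size limit on a result set
MAX_SECONDS = 600           # time-out for long-running queries
MAX_JOBS_PER_USER = 4       # maximum concurrent jobs per user

running = {}                # user -> number of jobs currently executing

def admit(user, est_rows, est_seconds):
    """Return (allowed, reason) for a query's cost estimate.
    A softer threshold could trigger a warning (the analogue of the
    client-side message box) instead of rejection."""
    if est_rows > MAX_ROWS:
        return False, "estimated result set too large"
    if est_seconds > MAX_SECONDS:
        return False, "estimated execution time exceeds time-out"
    if running.get(user, 0) >= MAX_JOBS_PER_USER:
        return False, "too many concurrent jobs for user"
    running[user] = running.get(user, 0) + 1
    return True, "admitted"

print(admit("alice", est_rows=5_000, est_seconds=30))
```

The design point is that the estimate is produced on the client but enforced centrally, so limits apply uniformly across the query workload.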
Restricting queries to specified times
There is no facility to limit queries to particular times of the day or week.
Management of metadata
Controlling visibility of the ‘road map’
There is some control over the visibility of the metadata because, in the first
instance, objects can only be seen by their creator, and then Object Manager
is used to share the visibility. If the designer logs in as an individual user
rather than a system user, this will limit the original visibility.
Adaptability
Summary
Metadata
Synchronising model and model metadata
The metadata is generated by DSS Architect (and stored as a project) and
the multidimensional business models (known as reports) are created from
this using DSS Agent. If some of the fields used in a report are removed
from a project, the report will run – but without the relevant fields. The user
is not informed of what has been lost.
Impact analysis
There is no support to inform the administrator of the effect that changing
the structure of the data warehouse will have on reports. However, Object
Manager lets administrators assess the impact of changes to a metric defini-
tion on reports.
Metadata audit trail (technical and end users)
There is no support for a metadata audit trail.
Access to upstream metadata
Integration with ETL products enables the developer to view extraction and
transformation metadata about the columns in the data warehouse that
provided data for the measures. However, caching has to be disabled for this
to work.
Metadata can be accessed from a range of ETL tools, including Informatica,
Acta, Ardent, Constellar, D2K, ETI, Prism, Relational Matters and
Systemfabrik.
Performance tunability
Summary
ROLAP
Multipass SQL
DSS Server uses multipass SQL.
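Multipass SQL answers questions that a single SELECT cannot express by issuing several statements at different aggregation grains and combining the results. The schema and statements below (computing each region's share of total sales via two passes) are illustrative assumptions, not Microstrategy's generated SQL:

```python
# Sketch of multipass SQL: two aggregation passes at different grains,
# combined into one answer. Schema and SQL are illustrative assumptions.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales (region TEXT, amount INTEGER);
    INSERT INTO sales VALUES
        ('East', 60), ('East', 40), ('West', 100), ('West', 200);
""")

# Pass 1: aggregate to the region grain, into a temporary table.
con.execute("CREATE TEMP TABLE pass1 AS "
            "SELECT region, SUM(amount) AS total FROM sales GROUP BY region")
# Pass 2: aggregate to the grand-total grain.
con.execute("CREATE TEMP TABLE pass2 AS "
            "SELECT SUM(amount) AS grand FROM sales")
# Final pass: join the two grains to compute each region's share.
rows = con.execute(
    "SELECT p1.region, 1.0 * p1.total / p2.grand "
    "FROM pass1 p1, pass2 p2 ORDER BY p1.region"
).fetchall()
print(rows)  # [('East', 0.25), ('West', 0.75)]
```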
Options for SQL processing
The processing of SQL queries is always carried out on the source database
(usually a data warehouse). DSS Server can take full advantage of database-
specific optimisation techniques such as hash and star joins, SQL extensions
and predicate clause optimisation.
Speeding up end-user data access
Non-volatile caches are used. The refresh schedule for these is defined in
DSS Agent.
The information provided by Warehouse Monitor can be used to define an
aggregate strategy in the data warehouse that would speed up user access.
The tool depends on database features (such as table partitioning) to help
speed-up data access.
Aggregate navigator
DSS Server is aggregate-aware. It automatically directs queries in order to
reference smaller, precalculated aggregate tables, thus improving performance.
For queries that require an aggregate table that does not exist, DSS Server
performs a ‘tree-walk’ to reference the most appropriate table from which to
calculate the aggregation.
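The routing described – directing a query to the smallest existing aggregate whose grain still covers the request – can be sketched like this. Table names, levels and row counts are assumptions for the example:

```python
# Sketch of aggregate-aware query routing. A query at a given level is
# directed to the smallest table whose grain is at or below that level in
# each dimension. Tables, levels and row counts are illustrative.

# Position in each hierarchy: lower number = finer grain.
LEVELS = {"day": 0, "month": 1, "year": 2,
          "store": 0, "region": 1, "all_geo": 2}

# (time grain, geography grain, approximate rows) for each stored table.
TABLES = {
    "fact_day_store":   ("day", "store", 10_000_000),
    "agg_month_region": ("month", "region", 50_000),
    "agg_year_region":  ("year", "region", 5_000),
}

def route(time_level, geo_level):
    """Choose the smallest table that can still answer the query."""
    candidates = [
        (rows, name) for name, (t, g, rows) in TABLES.items()
        if LEVELS[t] <= LEVELS[time_level] and LEVELS[g] <= LEVELS[geo_level]
    ]
    return min(candidates)[1]   # fewest rows wins

print(route("year", "region"))   # agg_year_region answers directly
print(route("month", "store"))   # no month/store aggregate: base table
```

When no exact-grain aggregate exists (the second call), the walk falls back to the finest table that covers the request, from which the aggregation is computed.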
MOLAP
DSS Suite does not support a MOLAP architecture.
Processing
Use of native SQL to speed up data extraction
DSS Server uses 16- and 32-bit ODBC connectivity to connect to leading
relational databases. The queries use the pass-through facility of ODBC to
use native SQL to retrieve data.
Distribution of processing
The processing of the SQL queries is always carried out on the source data-
base (typically the data warehouse). The administrator can use DSS Server
in realtime to tune the number of processes simultaneously running on the
data warehouse. Dynamic database thread allocation and management is
supported to prevent overloading of the data warehouse.
Customisation
Summary
Customisation
Option of using a restricted interface
This is well supported by a selection of interfaces in DSS Agent. When DSS
Agent is opened, users can elect whether to work in EIS or DSS mode. The
range of interfaces supports simple, intermediate, advanced and custom
functionality.
Ease of producing EIS-style reports
The production of EIS reports is supported by DSS Executive. An EIS is a
guided tour through a project with limited support for analytical exploration.
Using point-and-click, the important objects (buttons, images, labels, grids,
graphs and textboxes) are positioned and their properties specified. This
feature is well supported in an easy-to-use, drag-and-drop environment.
Applications
Simple web applications
Web applications are developed via DSS Web, using HTML authoring tools,
JavaScript or Visual Basic Script.
Development environment
There is no provision of a development environment specialised for OLAP
applications and providing ‘OLAP-aware’ components.
DSS Executive only supports standard EIS objects and visual layout func-
tions that can be used to build briefing books and simple EIS front ends.
Deployment
Platforms
Client
Clients run on Windows 3.x, 95, 98, NT and OS/2. Web access requires
Microsoft Internet Explorer or Netscape Navigator browsers.
Server
The server-based tools (DSS Server, DSS Web and DSS Administrator) run
only on Windows NT.
Data access
DSS Server uses ODBC to connect to the following relational databases:
Oracle, Informix, Teradata, Tandem, DB2, Sybase, Microsoft SQL Server,
Red Brick, ADABAS D and Microsoft Access. The queries use the pass-
through facility of ODBC to use native SQL to retrieve data.
There is no direct support for accessing specialised data sources such as SAP
BW and ERP operational data. This is regarded as a responsibility of the
database rather than the OLAP tool and is achieved mainly through the use
of ETL partners such as Prism, Informatica, ETI, Acta and Systemfabrik.
Standards
DSS Objects offers an OLE-based API for software development using a
COM-compliant language.
Published benchmarks
Microstrategy has not participated in any external benchmarking tests.
However, it has conducted its own internal ‘stress test’ benchmarks.
Price structure
Server pricing
Pricing for server components, including support for up to 50 users, is as
follows:
• DSS Server – $21,125
• DSS Web – $11,375
• DSS Broadcaster – $11,375.
Pricing for all servers increases incrementally with user-number categories
(for example, the next category is 51–200 users).
Development tools
The development tools are priced on a per-user model:
• DSS Architect (one user) – $9,750
• DSS Executive (one user) – $9,750
• DSS Administrator (one user) – $19,500
• Development Bundle (two users) – $45,500.
Interfaces
The interface components are priced on a per-user basis, but discounted
rates can apply according to total deal volume:
• DSS Agent (one user) – $1,670
• DSS Objects (one user) – $1,335
• DSS Web PE (one user) – $1,335
• DSS Web SE (one user) – $830
• DSS Broadcaster (one user) – $495.
Summary
At a glance
Terminology of the vendor
Ovum’s verdict
Product overview
Future enhancements
Commercial background
Product evaluation
Deployment
Platforms
Data access
Standards
Published benchmarks
Price structure
At a glance
Developer
Oracle, Redwood Shores, CA, USA
Versions evaluated
Oracle Express Server, version 6.2; Oracle Objects and Express Analyzer,
version 2.2; Oracle Web Publisher, version 2.0
Key facts
• An MDDB with an integrated development environment. Can also be
configured for ROLAP
• Web- and Windows-based clients and Unix- and NT-based server
• Oracle also produces sales and financial OLAP applications
Strengths
• A mature MDDB engine with a range of financial and analytical
functions
• An easy-to-use – but extensible – development environment, which can
be used to create EIS-style and industrial strength applications
• A complete package, including an end-user tool that can support power
users and has a web publishing component
Points to watch
• Requires 4GL coding skills to supplement the GUI environment; for
instance, to specify security levels for database objects
• Limited support for the publication and distribution of reports
• Not yet seamlessly integrated with other Oracle business intelligence
products
Ratings
Web support
Management
Adaptability
Performance tunability
Customisation
Terminology of the vendor
Formulae
Formulae are calculated measures, but they are created dynamically and not
stored.
SPL
The stored procedure language, also known as Express language, used in
Express Server. It is a 4GL-like language used to define analytical functions
and some Express operations.
SNAPI
Structured n-dimensional API. It is a C-language interface for accessing
Oracle Express Server and Personal Express, used when developing
applications in C/C++ or in languages, such as Visual Basic, that support
calls to C functions in DLLs. It is used by front-end tool vendors such as Business
Objects and Cognos to access Express data.
Variable
The Express term for a measure. A variable is cross-referenced only against
selected dimensions.
Ovum’s verdict
What we think
Oracle Express Server and its associated components have notable strengths
on several fronts. Its multidimensional database is a mature product that
combines GUI ease-of-use with the fully featured Express language giving
procedural control. The application tools, Oracle Express Objects and Oracle
Express Analyzer, enable applications and reports to be quickly prototyped
and more fully developed if required. Underpinning all aspects is a wide
range of analytical functions, particularly in the financial and forecasting
area. A further strength is the possibility of changing the data storage
architecture from MOLAP to ROLAP if the requirements change.
There is good, but not yet seamless, integration with other Oracle business
intelligence tools (such as Oracle Reports and Discoverer) and partnerships
with data mining vendors if this type of analysis is required.
Although generally Express is a GUI-based environment, it requires coding
in some important aspects, such as the specification of user access controls.
Express could be strengthened by the addition of more comprehensive
publication and distribution facilities.
Overall, however, the tool offers an excellent scalable and extensible data-
base and development environment, with the support facilities of a major
database player.
When to use
Oracle Express should be on your shortlist if you:
• are developing medium- to large-scale projects requiring customised
applications of varying complexity
• have complex analytical requirements where the ability to write new,
complex functions is important
• wish to enhance and further develop one of Oracle’s packaged
applications (such as Financial Analyzer or Sales Analyzer).
It is less suitable if you:
• want an ‘out-of-the-box’ solution for end users
• do not have access to the coding skills necessary to fully exploit the tools
• want sophisticated publication and distribution support.
Product overview
Components
The main components of the Oracle Express Development toolkit are:
• Oracle Express Server version 6.2
• Oracle Express Objects version 2.2
• Oracle Express Analyzer version 2.2
• Oracle Web Publisher version 2.0.
Figure 1 shows whether the component runs on a client or server and its
primary purpose.
Architectural options
Oracle Express Server can be configured to support multiple architectures.
Java objects
Oracle’s integration strategy is focused on Internet computing, and the
company sees open and portable components as critical to this approach.
Oracle has begun by re-engineering the decision support clients into a
collection of objects in the form of JavaBeans.
Data warehousing
A major Oracle goal is to provide an environment to cover all phases of
warehouse definition and operation, including integrated access tools.
Initiatives include further data warehousing support in Oracle 8.x and closer
integration with Oracle Express Server.
Metadata standard
Oracle has been participating in an industry initiative to provide a common
warehouse metadata standard. The company intends to use this in all future
developments, thus integrating Oracle’s decision support tools at the
metadata level.
Company background
History and commercial
Oracle was founded in 1977 and in 1979 brought the first SQL-based com-
mercial relational database system to market. The company has grown year
on year since its inception. In the early 1990s, it appeared to be faltering, but
made its comeback under the direction of Ray Lane, who was promoted to
Oracle’s president of worldwide operations in October 1993 and is now the
chief operating officer.
Revenues grew by a factor of four between 1991 and 1996, while income
before tax expanded almost tenfold between 1992 and 1996. In the fiscal
year 1997, revenue grew 35% to $5.7 billion; in the year ending May 1998,
revenue was $7.1 billion, up 17% on 1997, and profits for the year were
$955 million, compared to $845 million in 1997.
A breakdown of 1998 results points towards the main growth being in the
services arm, Oracle Consulting, which comprises half of Oracle’s business.
On the software side, database software did better than applications.
Express originated with Management Decision Systems, a consulting
company that developed the first commercially available multidimensional
database, Express, first released in 1972. Information Resources (IRI)
acquired the company in 1985.
In 1995, Oracle purchased the Express technology from IRI. Development
still continues in IRI’s original base at Waltham, MA. At that time,
approximately 600 of IRI’s 900 staff joined Oracle.
Ovum estimates that Oracle’s annual revenue from Oracle Express product
suite licences (including packaged applications) is in the region of $180
million, with services and support bringing the total revenue in this area to
$250 million.
Customer support
Support
Support is not included in the package, but offered as an additional service.
Support programs include telephone support, around-the-clock coverage, on-
site assistance, dedicated support account managers and priority call
handling.
Support is payable annually and is charged in line with software licensing
(that is, per named or concurrent user). Prices are available upon request.
Training
Oracle Education has 217 education centres in 63 countries, offering a
variety of courses.
Consultancy services
Consultancy is not packaged with the product. Service support is available
from Oracle Partners as well as Oracle Consulting.
Distribution
US
Oracle
500 Oracle Parkway
Redwood Shores
CA 94065
USA
Tel: +1 415 506 7000
Fax: +1 415 506 7200
Europe, Middle East and Africa
Rijnzathe 6
NL 3453 PV De Meern
The Netherlands
Tel: +31 30 669 4211
Fax: +31 30 666 5603
Asia-Pacific
5 Temasek Boulevard
#15-03 Suntec City Tower
Singapore 038985
Tel: +65 337 3797
Fax: +65 337 6109
http://www.oracle.com
Product evaluation
End-user functionality
Summary
The end-user functionality depends on the tool used to access the model or
application. Here, we mainly assess the functionality offered by Analyzer
when accessing a multidimensional business model. Although the model is
easy to use and offers the expected range of drill-down and pivot features, the
product is prevented from getting a higher score for this criterion by the lack of
support for cataloguing, publishing and distributing models and reports. The
tool could benefit from absorbing some of the features of Oracle Reports.
(There is no integration between the two tools.)
The tool provides a flexible and powerful interface to define the business
model, and prototypes can be quickly built using the database wizard. It
offers a full range of features for building the multidimensional business
model. The main ways in which the tool could be enhanced are through more
structured support for the collection of metadata (to collect richer informa-
tion) and by the provision of version control.
Basic design
Design interface
The business model is defined using dialogue boxes and point-and-click.
Visualising the data source
Source data can be seen.
Universally available mapping layer
There is no direct support for a universally available mapping layer.
Prompts for metadata
The developer is prompted for short names (for graphs) and a longer, more
descriptive, term for objects such as dimension levels and measures.
Multiple designers
Multiple designers
The need for control applies both to the database and the development
environment. In the database, only one user can have write access at any
one time. Oracle supports multi-write access to its Financial Analyzer
application.
Within the development environment, designers work on project libraries.
Two or more users work on project libraries, which are then merged and
compiled into a single project.
Support for versioning
There is no support for versioning in Oracle Express.
Data mining
Data mining is not supported.
Web support
Summary
The Oracle Express development tools enable users to access models using
Web Agent (which is bundled with Express Server) and to publish pages
using Web Publisher. As with most OLAP tools, there is support for web access
but not for the creation of new models.
The main focus of web support is the publication of a web page. There is no
support for using the Internet for personalised distribution.
Management
Summary
Management of models
Separate management interface
Many of the management tasks are done using Oracle Express Administra-
tor. Scheduling, however, is organised using the Express Batch Manager.
Security of models
Security is defined using both the operating system and Oracle Express’s
functions. It is defined using the Express language.
Query monitoring
Through the ‘query statistics’ option, the administrator can get information
about performance statistics and about how the levels in the dimensions are
being used and infer the need for summary tables.
Management of data
How persistent data is stored (not scored)
When Express is used in MOLAP mode, persistent data is stored in the
multidimensional database. When it is used in ROLAP mode, retrieved data
is held in a cache in the MDDB.
Scheduling of loads/updates
Scheduling is supported through Express Batch Manager (a graphical utility
to create, monitor and control batch processes), which comes with Express
Server.
Event-driven scheduling
Event-driven scheduling, contingent on flags, the existence of a file or time-
stamps, is supported by the administrator writing scripts in the Express
language.
Failed loads/updates
If an upload fails there is error reporting, but there is no facility to specify
that the schedule is automatically re-run.
Distribution of stored data
The database can be stored wherever the developer wishes.
Sparsity (only for persistent models)
Oracle Express has a method to handle the indexing of sparse measures, but
does not offer wizard support for this.
The simplest method of reducing indexing, if the measure is sparse in only
one direction, is to specify this as the last dimension. This will reduce the
size of the database because pages containing ‘n/a’ values are not saved.
If sparsity is more randomly distributed the measure is defined as a ‘sparse
variable’ along one or more of its dimensions. When a measure is defined as
sparse in this way the system automatically creates a ‘composite’, which is
the list of dimension value combinations that provides an index into one or
more sparse variables. For efficiency, measures can share composites so that
one combination of dimension values is used to access more than one meas-
ure. If the sparsity patterns are different, individual composites can be
defined.
By default, composites are created using BTREE algorithms but, optionally,
a HASH method can be chosen. There is no system support to help the user
see the benefits of the alternatives.
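A composite – an index over only the dimension-value combinations that actually occur – can be sketched as a dictionary shared by several measures. A Python dict is hash-based, so this corresponds to the optional HASH method rather than the default B-tree; all names are illustrative:

```python
# Sketch of a 'composite': instead of reserving a cell for every
# (product, store) combination, index only the combinations that occur,
# and let measures with the same sparsity pattern share that index.
# (A dict is hash-based; Express defaults to a B-tree structure.)

composite = {}        # (product, store) -> slot number, shared by measures
sales, costs = [], [] # arrays holding only the populated cells

def record(product, outlet, sales_val, cost_val):
    """Store values for one existing dimension-value combination."""
    key = (product, outlet)
    if key not in composite:
        composite[key] = len(sales)   # allocate the next slot
        sales.append(0)
        costs.append(0)
    slot = composite[key]
    sales[slot], costs[slot] = sales_val, cost_val

def lookup(measure, product, outlet):
    slot = composite.get((product, outlet))
    return None if slot is None else measure[slot]  # None plays 'n/a'

record("widget", "london", 120, 80)
print(lookup(sales, "widget", "london"), lookup(sales, "widget", "paris"))
```

Because `sales` and `costs` share one composite, a single combination of dimension values indexes both measures, as the text describes; measures with different sparsity patterns would get their own composites.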
Methods for managing size
There are two main ways of managing the size of the database:
• the multi-cube architecture reduces the size, as cubes are generally
composed of dimensions that are densely populated with regard to each
other
• by processing calculated measures as required.
There is no wizard support for these administrative decisions.
In-memory caching options
No options are available.
Informing the user when stored data was last uploaded
There is no direct support to inform the user about when the data being
accessed was last uploaded.
In ROLAP mode the data may be freshly retrieved, cached during the user
session or cached more permanently. Using Analyzer for ad hoc queries, the
user is not aware of the type of cache in use.
Applications built using Express Objects can include information to inform
the end user of the currency of the data.
Management of users
Multiple users of models with write facilities
As described in Building the business model, only one user can write to the
model at any one time.
User security profiles
User access is defined using the Express language.
Query governance
Query governance is primarily needed when the tool is used in ROLAP
mode. There is no direct support.
Restricting queries to specified times
There is no support for this.
Management of metadata
Controlling visibility of the ‘road map’
The visibility of models and their metadata can be controlled using ‘permit’
commands in the Express language.
Adaptability
Summary
Metadata
Synchronising model and model metadata
This is not an issue because there is little metadata to synchronise.
Impact analysis
There is no direct support for impact analysis.
Metadata audit trail (technical and end users)
There is no support for this.
Access to upstream metadata
There is no integration with third-party tools to give access to metadata
generated at an earlier stage.
Performance tunability
Summary
Express Server can operate in both MOLAP and ROLAP mode, so it could
potentially be finely tuned for both approaches. As expected, its tunability
strengths are as a MOLAP engine.
The appropriate design of multi-cubes can enhance performance, but there is
no automatic support for this. The tool supports SMP.
One weakness of the tool, when used in ROLAP mode, is that the users have
no direct way of knowing how long the data they are viewing has been
cached.
ROLAP
Multipass SQL
Multipass SQL is supported.
Options for SQL processing
The SQL statements are always processed on the database host machine.
Speeding up end-user data access
Retrieved data is stored in a cache in Express. Caching options are:
• transient cache, in which the data is held in an Express cache for the
duration of the user session
• do not cache, but always query the RDBMS
• permanent cache – all or some levels of the data are permanently stored
in the Express cache.
It is expected that users will know what form of cache is being used,
because it will reflect their own requirements; there is therefore no
direct means of indicating the currency of the data to the end user.
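The three cache modes above can be sketched as a small fetch wrapper. This is an illustration of the policies as described, not Oracle's implementation; all names are invented.

```python
class CachingFetcher:
    """Sketch of Express-style cache modes: transient, none, permanent."""

    def __init__(self, mode, source):
        assert mode in ("transient", "none", "permanent")
        self.mode, self.source = mode, source
        self.cache = {}           # emptied per session in "transient" mode
        self.rdbms_hits = 0       # round trips to the relational source

    def end_session(self):
        if self.mode == "transient":
            self.cache.clear()    # transient data lives for one session only

    def fetch(self, key):
        if self.mode == "none":
            self.rdbms_hits += 1  # always query the RDBMS
            return self.source[key]
        if key not in self.cache:
            self.rdbms_hits += 1
            self.cache[key] = self.source[key]
        return self.cache[key]

f = CachingFetcher("transient", {"sales": 42})
f.fetch("sales"); f.fetch("sales")
print(f.rdbms_hits)  # 1 -- second fetch served from the cache
f.end_session()
f.fetch("sales")
print(f.rdbms_hits)  # 2 -- the cache was emptied at session end
```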
Aggregate navigator
Express can transparently make use of summary tables. Information about
summary tables is entered using Express Relational Access Administrator.
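The aggregate navigation described above can be sketched as follows: given summary tables registered with the grain (set of dimensions) they carry, route a query to the cheapest table whose grain covers the request. Table names and row counts here are invented for illustration.

```python
# Registered summary tables: (name, grain, approximate row count).
SUMMARY_TABLES = [
    ("sales_by_region_month", {"region", "month"}, 1_000),
    ("sales_by_region",       {"region"}, 50),
    ("sales_detail",          {"region", "month", "product"}, 1_000_000),
]

def navigate(requested_dims):
    """Pick the smallest table whose grain covers the requested dimensions."""
    candidates = [(rows, name) for name, grain, rows in SUMMARY_TABLES
                  if requested_dims <= grain]
    return min(candidates)[1]

print(navigate({"region"}))           # sales_by_region
print(navigate({"region", "month"}))  # sales_by_region_month
print(navigate({"product"}))          # sales_detail -- no summary covers it
```

The point of transparency is that the user's query names only dimensions and measures; the choice of physical table happens underneath.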
MOLAP
Trading off load time/size and performance
The multi-cube architecture assists in size reduction.
When specifying the cube definition in Express Administrator, any measure
defined as a formula (a calculated measure) can be pre-calculated or not.
Customisation
Summary
Oracle Express and its associated products provide excellent support for
application development. Reports with multidimensional features can be
developed using Oracle Express Analyzer’s visual development environment.
Using Oracle Express Objects, fully featured applications can be developed
combining ease of use with the power of a procedural language. Finally, Web
Publisher enables ‘browser aware’ applications to be created.
Customisation
Option of using a restricted interface
There is no direct support for users to use a restricted interface.
Ease of producing EIS-style reports
Using the visual development environment in Oracle Express Analyzer,
users can create a customised report with EIS-type functionality.
Applications
Simple web applications
These can be developed using Web Publisher.
Development environment
Oracle Express Objects is an environment and a set of components. It uses
Express Basic to give programmatic control. In style it is similar to Visual
Basic, with the addition of multidimensional objects such as table objects
and graph objects and dimensionally aware list boxes. The multidimensional
aspects are defined through a database browser.
It is an object-oriented environment offering inheritance and polymorphism.
The development environment can be enhanced by the creation of new
objects, which can then be added to the toolbox.
An application is developed as a series of pages.
Use of third-party development tools
SNAPI provides a C language interface to Express Server, and can be used
to write programs in C/C++ or any other Windows programming environment
(such as Visual Basic or Delphi) that supports calls to C functions in
DLLs.
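The calling pattern involved is the standard one for any C API packaged as a DLL or shared library. The real SNAPI entry points are not reproduced here; as a stand-in, the sketch below drives the C runtime's `strlen` from Python's `ctypes` to show the same load-declare-call sequence a SNAPI client would use.

```python
import ctypes

# Load the running process's own C library (POSIX); a SNAPI client would
# load the SNAPI DLL here instead.
libc = ctypes.CDLL(None)

# Declare the C signature before calling, exactly as for any DLL export.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"express"))  # 7
```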
Deployment
Platforms
Oracle Express Server is available for Microsoft Windows NT and various
Unix platforms, including IBM AIX, Sun Solaris, HP-UX and Digital Unix.
Personal Express is available for Windows 95 and NT.
Data access
ODBC is used to access the data sources, thus any data source for which
there is an ODBC driver can be accessed.
Standards
The published API is the Structured N-dimensional Application Programming
Interface (SNAPI). This is compliant with the OLAP Council’s specification.
Published benchmarks
In May 1998, Oracle published figures for the OLAP Council’s APB-1 OLAP
benchmarks.
Price structure
Oracle Express Server (including Oracle Express Web Agent, Oracle Express
Administrator, Oracle Express Spreadsheet-In and Relational Access
Manager) is licensed on a concurrent basis. Prices start at $4,995 for three
concurrent users. The single user version of the Server, Personal Express, is
$870 per named user.
Oracle Express Objects (including Oracle Express Web Publisher) is priced
at $4,995 per named user, with Oracle Express Analyzer at $745 per named
user.
All prices are Oracle’s standard global prices. Contact your local Oracle office
for local country pricing.
Pilot Software Pilot Decision
Support Suite
Summary
At a glance
Terminology of the vendor
Ovum’s verdict
Product overview
Future enhancements
Commercial background
Product evaluation
Deployment
Platforms
Data access
Standards
Published benchmarks
Price structure
Evaluation: Pilot Software – Pilot Decision Support Suite Ovum Evaluates: OLAP
At a glance
Developer
Pilot Software, Cambridge, Massachusetts, USA
Versions evaluated
Pilot Decision Support Suite version 6.1
Key points
• A hybrid OLAP server with client-server developer and end-user tools
• Server runs on Windows NT and Unix; clients support Windows 95/98/NT.
Web access is provided
• Pilot has developed analytical applications for retail, CRM and business
performance measurement
Strengths
• A mature MDDB engine that supports a range of analytical functions
• Dynamic dimensions and hierarchies increase the scalability and
flexibility of multidimensional models
• Supports an extensible library of analytical modules for immediate
analysis of data
Points to watch
• Pricing is geared for high-end sites – hybrid OLAP and web access are
expensive options, instead of being bundled with the core product
• Server set-up and management can be complex
• Questions still remain about the company’s stability and growth
Ratings
Web support
Management
Adaptability
Performance tunability
Customisation
Ovum’s verdict
What we think
Pilot Decision Support Suite (PDSS) is a good choice for companies that
want to perform complex analysis against large multidimensional datasets.
The analytical power is expensive, however, making the product a high-end
solution.
PDSS offers high performance and scalability. The tools are built on a ma-
ture multidimensional database (MDDB) engine and a well-conceived
architecture that gives database administrators (DBAs) considerable flexibil-
ity over where data is stored – in a MDDB or relational database, or combi-
nations of both. Pilot’s dynamic dimensions and hierarchies are unique
features that significantly increase adaptability, enabling end users to define
new member groups in the model on-the-fly without having to rely on DBAs
to redesign the core model. They provide an efficient means of tackling the
requirements of large customer analysis or product management applica-
tions that analyse data at an attribute level.
The prebuilt analytical applets included in the Analysis Library are an
important aid to productivity, allowing for fast and easy analysis of data
from different perspectives. Developers are spared the effort of building
these applications from scratch and have access to a set of customisable
objects as a basis for further development.
PDSS is priced as a high-end tool – the hybrid functionality is considered a
luxury rather than a core part of the product set, and is priced at 50% above
the standard Analysis Server. The set-up and management of the MDDB can
be complex, and the product lacks graphical tools and aids that help to build
and maintain the hybrid system – many functions still rely on Pilot’s com-
mand-line interface. Apart from this shortcoming, there are few reasons to
criticise PDSS in technical terms. The greatest challenge for Pilot is to
improve its commercial standing after some turbulent years. To its credit,
the company has implemented wholesale changes to its sales and marketing
organisation and is targeting new markets via partnerships and the provi-
sion of vertically-focused analytical applications.
When to use
PDSS is most suitable if you:
• are developing medium- to large-scale customer analysis or product
management applications that need attribute-level analysis
• wish to analyse data over a range of time periods, using a variety of
forecasting methods
• want the flexibility to store data in a MDDB, a relational database or
combinations of both
• wish to provide users with out-of-the-box analysis with a minimal
development effort.
Product overview
Components
Pilot Decision Support Suite 6.1 consists of the following components:
• Pilot Analysis Server
• Pilot Designer
• Pilot Desktop
• Pilot Internet Publisher
• Pilot Desktop Reporting
• Pilot Data Mining (Discovery Server).
Figure 1 shows the primary functions of the components and whether they
run on the client or the server.
Pilot Designer
This is a client-server development environment for building analytical
applications. Designer also provides a Visual Basic-like scripting language
for development.
The development environment is closely integrated with the Analysis Server.
Components of the development environment – the table object for building
screens, for example – are ‘dimensionally aware’ of the models that they are
working with.
Pilot Desktop
This is a client-server end-user tool for creating and analysing
multidimensional models. Desktop provides a client runtime environment for
Designer applications and includes a standalone version of Analysis Server,
to which data can be extracted from a network server.
One aim of Desktop is to provide support for mobile computing.
Desktop integrates three components:
• Model Builder
• Pilot Analysis Library
• Pilot Excel Add-In.
Model Builder
A graphical development tool used to create multidimensional models. Model
Builder can be used to build models in Analysis Server or the PC version of
the OLAP engine.
Pilot Analysis Library
A library of prebuilt ‘analytical applets’ for ad hoc OLAP analysis and more
specialised analysis, such as complex ranking, Pareto analysis, trend-line,
budgeting, exception reporting and forecasting. The applets can be used
across any Pilot model or customised using the Designer component. There
are more than 14 prebuilt analytical applets provided in the Analysis Library.
Pilot Excel Add-In
A DLL add-in that enables Microsoft Excel users to access and analyse
models in Analysis Server. It provides a dimensional selection object, and
drill-down and rotate/pivot facilities. Users can also format and display data
using standard Excel tools.
Architectural options
One of the most important features of PDSS is its flexibility in supporting a
range of OLAP architectures for both client-server and web implementations.
Using PDSS
Hybrid OLAP
PDSS is a hybrid OLAP tool that provides a server-based multidimensional
database as well as a relational storage option. What differentiates PDSS
from other hybrid OLAP tools is how it implements the hybrid system. In
most hybrid systems, the upper level aggregate data is stored in the MDDB;
lower level (or detail) aggregate data is stored in the relational format. When
detail data is requested, the system drills through the MDDB to the rela-
tional database in order to retrieve the data.
PDSS can store data in a multidimensional or relational database by analys-
ing usage. For example, if the upper tier of aggregate data is rarely accessed,
it can then be stored in the relational database with the detail data; mid-
level aggregate data that is frequently accessed can be stored in the MDDB
for quick access. Although Pilot gives administrators considerable flexibility
over where the data is stored, the set-up and management of the servers can
be complex.
Figure 2 is a typical PDSS architecture that shows how different levels of
data can be stored in an MDDB server or a relational database.
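The usage-driven placement described above can be sketched as a simple routing rule: aggregation levels with high monitored access counts go to the MDDB, rarely touched levels stay relational. Level names and the threshold are illustrative, not Pilot's.

```python
def place_levels(access_counts, hot_threshold=100):
    """Map each aggregation level to 'MDDB' or 'RDBMS' by observed usage."""
    return {level: ("MDDB" if hits >= hot_threshold else "RDBMS")
            for level, hits in access_counts.items()}

# Hypothetical access counts per aggregation level.
usage = {"grand_total": 12, "region_month": 450, "detail": 30}
print(place_levels(usage))
# {'grand_total': 'RDBMS', 'region_month': 'MDDB', 'detail': 'RDBMS'}
```

In a real deployment the counts would come from the server's monitoring facility and the placement would be applied by the administrator, not computed automatically.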
The main advantage for end users is flexibility when working with a busi-
ness model. Users can define their own custom aggregation levels on-the-fly,
without the need to wait for a DBA to redefine the core business model.
Developing applications
PDSS provides Designer, its own integrated development environment,
which offers a number of graphical tools for creating analytical applications
easily. Designer applications are based on the concept of ‘sheets’, which are
composed of ‘objects’. Each object has associated properties defined in a set
of tabbed dialogues.
Designer has a Visual Basic-like scripting language that is used to define
dialogues or fine-tune functionality. Most development is done by tailoring
presupplied objects such as tables, charts, OLE containers or listboxes.
The Object Manager interface gives a hierarchical view of components of a
sheet, as shown in Figure 4. Its aim is to make the task of managing complex
screens easier.
The table object is particularly important, because it provides cross-tabular
functionality for multidimensional analysis and no additional programming
is needed. The table object can be linked to a dimensional model, SQL proce-
dure, text file or DDE source.
When using the table object against a business model, you select a view on
the model and the application runs immediately, providing default cross-
dimensional analysis capabilities. Default features include:
• charting
• the selector object (for choosing measures and dimensions)
• an inherent understanding of drill-down
• access to Pilot’s calendar functions.
The features that are made visible to the end user can be limited by the
developer.
As with prebuilt analytical applets, applications that are created in Designer
are inherently data-driven rather than procedurally defined. This means
that an application should run against any model without change.
Future enhancements
Version 6.2 of PDSS is due in mid-1999. It will include the following en-
hancements:
• model partitioning – allowing data to be stored across separate models
with shared structures. The models can then be treated independently,
and can be loaded and consolidated incrementally by different processes
• the ability to re-use and share dimensional structures and measure
definitions between models
• writeback capabilities via the Excel Add-In facility
• agent-based distribution services – through an ongoing partnership with
Blue Isle Software, Pilot will provide agent-driven analysis and
distribution capabilities. For example, users will be able to create an
agent process to watch for specific criteria (for example, exceptions or
other events in the database), triggering data loads or updates and then
notifying users via e-mail (or other devices such as pagers and mobile
phones)
• extended data access from Internet Publisher – web users will be able to
access MDDB and relational data from the same screen without
additional third-party data access tools.
Tight integration with Thinking Machines’ data mining tool, Darwin, is also
scheduled for 1999. The aim is to have the data mining engine talking to the
Analysis Server, so that results from data mining will feed directly into the
MDDB server model. The integration will provide the focus for developing
new fraud detection applications that extensively integrate data mining
functionality.
Pilot intends to develop additional vertical applications that build OLAP
analysis solutions on top of CRM vendor packages. It is seeking to form
partnerships with campaign management, salesforce automation and call
centre software vendors.
Commercial background
Company background
History and commercial
Pilot Software was incorporated in the US in 1983. It is one of the longest-
established OLAP vendors. Pilot launched one of the first EIS tools
(Command Center) in 1984 and became a leading vendor in the mainframe EIS
market. The company has claimed a number of ‘technology firsts’, including
time-series analysis and the use of a multidimensional database. Pilot
Lightship, first launched in 1992, marked a departure from a mainframe-
centric approach towards a client-server architecture. Lightship has now
been replaced by PDSS.
In 1994, Pilot was bought by Dun & Bradstreet and was incorporated as part
of its Cognizant business division. The takeover provided Pilot with re-
sources to complete its transition to a provider of client-server tools. How-
ever, it did not deliver the growth in revenues that was expected; conflicting
visions between the two management teams resulted in significant problems
with Pilot’s sales and marketing activities. In late 1997, Pilot was sold to
Platinum Equity Holdings, a US company that specialises in buying high-
tech companies. A new business model that focused on channels and partners
was then defined, and unprofitable operations were shut down. Pilot also
restructured its salesforce and management team, and appointed a new
CEO.
Pilot employs around 175 people. Ovum estimates that the company had
revenues of approximately $35 million for the 1998 fiscal year. The compa-
ny’s corporate headquarters is in Cambridge, Massachusetts, USA, with
regional offices and distributors worldwide.
Pilot has more than 500 customers and has sold more than 100,000 user
licences worldwide. Most of the customers are large Global 2,000 companies.
Pilot has a particularly strong presence in the retail, telco and consumer
packaged goods sectors. Large customers include AT&T, Office Max, Burger
King, Kmart, Baskin Robbins, Whirlpool and Lucent Technologies.
The company sells its products mainly through direct channels, but is in-
creasingly using channel partners to penetrate new markets such as cus-
tomer relationship management, manufacturing and healthcare, and bal-
anced scorecard. Major application partners include Foresight Software,
Lightbridge, American Software, Synertech, IMS Health and Touch.
Customer support
Support
Telephone, e-mail and fax support is available worldwide; it is primarily
aimed at developers rather than end users. Support centres are located in
the US, Europe and Australia. Support is priced at around 20% of the licence
fee.
Training
Pilot offers training for all components of PDSS, which is available either on-
site or from its worldwide offices.
Consultancy services
Pilot provides a range of consultancy services for PDSS implementation and
application development. Most consultancy work focuses on defining business
requirements, building models and implementing data access strategies.
Distribution
US
Pilot Software
1 Canal Park
Cambridge, MA 02141
USA
Tel: +1 617 374 9400
Fax: +1 617 374 1110
Europe
Pilot Software
Maxfli Court
Riverside Way
Camberley
Surrey GU15 3YL
UK
Tel: +44 1276 687000
Fax: +44 1276 687077
Asia-Pacific
Pilot Software
Level 1, Building A
Forest Corporate Park
18 Rodborough Road
Frenchs Forest
NSW 2086
Australia
Tel: +61 2 9975 2380
Fax: +61 2 9975 2386
http://www.pilotsw.com
E-mail: info@pilotsw.com
Product evaluation
Users that are familiar with analysis tools will find Desktop easy to navigate,
but OLAP novices may initially feel overwhelmed by the interface. Desktop
comes with a number of point-and-click analytical applets that can be run
immediately against any model. The look-and-feel of the navigation is simi-
lar, regardless of the analysis at hand; once users learn how to navigate one
application, the others will be easy to use. Advanced reporting relies on an
OEM-enabled version of Crystal Reports, but customers must pay extra for this.
The client tools would benefit from more metadata (to help end users under-
stand the model better) and the ability to drill through to source data. Publish-
and-subscribe capabilities for widespread distribution are not supported.
Basic design
Design interface
Model Builder is a menu-driven GUI tool for building business models and is
part of the Desktop and Designer clients. Model designers can select the
dimensions, hierarchies and measures required in a model through a set of
menus, pick lists and dialogue boxes.
Model Builder is best used to build a standard base model, but lacks ad-
vanced design functions. This can be remedied using the IDQL (Interactive
Dimensional Query Language) command-line interface.
Visualising the data source
It is not directly possible to visualise the data source, but a check-box is
associated with every field listing and allows developers to display values
from the data source during the design process.
Universally available mapping layer
A universally available mapping layer is not supported.
Prompts for metadata
The level of metadata captured during the design process is minimal.
Multiple designers
Other than locking a model, there is no shared repository to support
multidesigner environments.
Support for versioning
Versioning control is not supported.
Pilot’s Analysis Library is the most significant resource for analysis. The
analytical applets fulfil standard and more specific analysis tasks, and
support a wide range of statistical analyses, correlation methods and
forecasting techniques. The Analysis Library is powerful and easy to use.
Most applets can be used out-of-the-box and can be applied to data from any
Pilot model; however, users would benefit from additional help or tutorials to
interpret the results. Developers can also customise the applets (using the
Designer tools) and create new functions using IDQL, the non-procedural
language. Data mining capabilities are also provided.
User-definable extensions
Users can extend the supplied functions and build their own using IDQL,
Pilot’s proprietary language. IDQL is a simple non-procedural language
similar in syntax to Pascal. The functions that are created can be stored in a
common repository for re-use.
Data mining
Predictive data mining capabilities are available via Discovery Server;
however, Pilot is in the process of phasing out this application, and now has
an agreement with Thinking Machines to integrate with (and resell) its
Darwin data mining tool. Further information on Darwin can be found at
http://www.think.com.
Web support
Summary
Management
Summary
Management of models
Separate management interface
The Administrator interface is used to manage models and data. The inter-
face has a command line; administrative functions are accessed by program-
ming directly in IDQL.
Security of models
Three access modes are available for models:
• exclusive – enables only one user to access the model at a time in the
read or write modes
• read – enables all users to access the model in read mode
• shared – enables all users to access the model in the read or write modes.
While one user is using the model in shared mode, no other user can
access it in the exclusive or read modes. Because Analysis Server must
continually check whether the data has changed, the shared mode results
in a much slower response time and is not recommended.
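The three access modes above reduce to a small admission rule: exclusive tolerates no other session, read coexists only with other readers, and shared coexists only with other shared sessions (since a shared-mode user blocks both exclusive and read access). A sketch of the stated rules, not Pilot's implementation:

```python
def can_open(mode, current_sessions):
    """Can a session open the model in `mode`, given sessions already open?"""
    if not current_sessions:
        return True
    if mode == "exclusive" or "exclusive" in current_sessions:
        return False                        # exclusive access excludes all others
    # read coexists only with read; shared coexists only with shared.
    return all(m == mode for m in current_sessions)

print(can_open("read", ["read", "read"]))   # True
print(can_open("read", ["shared"]))         # False
print(can_open("exclusive", ["read"]))      # False
```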
User and/or group access to models can also be limited by dimensions,
aggregation levels or measures. Users are simply provided with filtered
views of the business model and are not aware that they are excluded from
elements of the model.
Query monitoring
TableTracker monitors queries that are made to the MDDB and relational
database. The number of hits and the time taken to process a query
based on an intersection of specific model elements is recorded and stored in
a relational table for further analysis and performance tuning.
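The monitoring described above amounts to keeping, per intersection of model elements, a hit count and cumulative query time. A minimal sketch (the real statistics land in a relational table; the structure and names here are illustrative):

```python
from collections import defaultdict

class QueryTracker:
    """Record hits and total query time per intersection of model elements."""

    def __init__(self):
        self.stats = defaultdict(lambda: {"hits": 0, "seconds": 0.0})

    def record(self, intersection, seconds):
        row = self.stats[tuple(sorted(intersection))]  # canonical key
        row["hits"] += 1
        row["seconds"] += seconds

    def hottest(self):
        """The most frequently queried intersection -- a tuning candidate."""
        return max(self.stats, key=lambda k: self.stats[k]["hits"])

t = QueryTracker()
t.record({"region:North", "month:Jan"}, 0.12)
t.record({"region:North", "month:Jan"}, 0.08)
t.record({"product:Widget"}, 0.50)
print(t.hottest())  # ('month:Jan', 'region:North')
```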
Management of data
How persistent data is stored (not scored)
Data can be stored in a MDDB, a relational database, or combinations of
both.
Scheduling of loads and updates
Once a model is defined, Model Builder creates a set of procedures that can
be run to update a model. The Analysis Server does not, however, have its
own scheduling services; this has to be done using operating system functions.
Event-driven scheduling
Event-driven scheduling is not supported.
Failed loads and updates
All loads and updates to a model are logged and a trace is provided of any
rejected data.
Distribution of stored data
Data can be stored in Analysis Server’s MDDB or in the relational database
as base level data or in summary tables. This flexibility allows DBAs to
configure where the different aggregate layers in a model are stored. Data
can also be stored locally on the client.
Sparsity (only for persistent models)
Sparsity handling is automatically defined – the MDDB only stores and
indexes data that exists. Analysis Server uses hashing techniques to handle
sparse data.
Methods for managing size
Storage requirements for large models can be controlled through dynamic
dimensions and the ability to selectively cross-dimension measures (for
example, price may relate dimensionally to product but not to geography).
Dynamic measures, calculated at runtime, can also be defined to save space.
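The sparsity and size-management ideas above can be sketched together: store only the cells that exist, keyed by a hashed coordinate tuple, and define "dynamic" measures that are computed at request time instead of being stored. This illustrates the concepts, not Pilot's engine.

```python
class SparseCube:
    def __init__(self):
        self.cells = {}    # {(measure, *coords): value} -- hashed lookup,
                           # absent combinations cost nothing
        self.dynamic = {}  # {measure: function of stored measures}

    def store(self, coords, value):
        self.cells[coords] = value

    def get(self, measure, coords):
        if measure in self.dynamic:
            return self.dynamic[measure](self, coords)  # computed at runtime
        return self.cells.get((measure,) + coords)

cube = SparseCube()
cube.store(("sales", "North", "Jan"), 1000.0)
cube.store(("cost", "North", "Jan"), 700.0)
# margin is never stored -- it is calculated at runtime to save space.
cube.dynamic["margin"] = lambda c, k: c.get("sales", k) - c.get("cost", k)
print(cube.get("margin", ("North", "Jan")))  # 300.0
print(len(cube.cells))                       # 2 -- only existing data stored
```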
In-memory caching options
In-memory caching options are not supported.
Informing the user when stored data was last uploaded
There is no provision in the default applications for informing end users of
the currency of the data that they are working with. A ‘text’ measure called
‘last_update’ (or similar), with no dimensions associated with it, can be used.
With each load, a string is stored in this measure with the time and date of
the last update (which can subsequently be queried in reports).
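The workaround described above is simple to picture: a dimensionless text measure is overwritten with a timestamp string on every load, and reports query it like any other measure. A sketch with an invented model structure:

```python
from datetime import datetime

# Illustrative model structure -- a dict standing in for the MDDB model.
model = {"rows": [], "measures": {}}

def load_data(model, rows):
    model["rows"] = rows
    # Overwrite the dimensionless 'last_update' measure at every load.
    model["measures"]["last_update"] = datetime.now().isoformat(" ", "seconds")

load_data(model, [("North", "Jan", 100.0)])
print(model["measures"]["last_update"])  # e.g. 1999-03-01 06:00:00
```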
Management of users
Multiple users of models with write facilities
Only single-user writeback is supported. When the writeback mode is used,
the entire database is locked and write access is exclusive to a single user.
Management of metadata
Controlling visibility of the ‘roadmap’
In the case of PDSS, the visibility of data is principally governed by the
access rights granted to specific users. Additionally, any controlled view of
data can be defined in IDQL.
Adaptability
Summary
Metadata
Synchronising model and model metadata
Apart from structural metadata, which remains synchronised in the MDDB
at all times, there is little metadata to synchronise.
Impact analysis
There is no support for impact analysis.
Metadata audit trail (technical and end users)
A metadata audit trail is not supported.
Access to upstream metadata
There is no access to upstream metadata from data warehousing tools.
Performance tunability
Summary
ROLAP
Multipass SQL
Multipass SQL is not supported.
Options for SQL processing
Standard SQL processing is carried out on the database server, while
advanced and time-based calculations are carried out on Analysis Server.
Speeding up end-user data access
Server-based caching of frequently-requested data is supported. If the
underlying data is changed, however, then the cache is automatically deleted.
Aggregate navigator
PDSS is aggregate-aware. By using TableTracker and metadata, the SQL
generator is aware of the nearest neighbour for consolidations and uses that
data.
MOLAP
Trading off load time/size and performance
Dynamic dimensions remove the need to reconsolidate the model every time
new data is loaded into the MDDB. There is a trade-off in terms of
performance, however, for models that analyse a large number of dimension
attributes.
Analysis Server provides a graphical interface for specifying whether data
is preconsolidated or consolidated on-the-fly, based on usage statistics
returned from the TableTracker monitoring tool.
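The usage-driven choice just described can be sketched as a threshold split: intersections whose monitored hit counts are high get preconsolidated at load time, the rest are rolled up on demand. The threshold and statistics are illustrative.

```python
def split_consolidation(usage_stats, threshold=50):
    """Partition intersections into preconsolidated and on-the-fly sets."""
    pre = {cell for cell, hits in usage_stats.items() if hits >= threshold}
    return pre, set(usage_stats) - pre

# Hypothetical hit counts per intersection, as a monitor might report.
stats = {("region", "month"): 900,
         ("product", "month"): 12,
         ("region", "product"): 75}
pre, on_the_fly = split_consolidation(stats)
print(sorted(pre))  # [('region', 'month'), ('region', 'product')]
```

The trade-off is exactly the one the text notes: everything preconsolidated maximises query speed but load time and size; everything on-the-fly minimises both at the cost of slower queries.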
Processing
Use of native SQL to speed up data extraction
Native access is only provided for Oracle and Sybase relational databases.
Access to other RDBMSs is via ODBC.
Distribution of processing
PDSS does not support distributed server architectures (for example,
through peer-to-peer processing between Analysis Servers).
Internet Publisher does not support load balancing across multiple servers.
SMP support
There is no support for SMP parallelism.
Customisation
Summary
Customisation
Option of a restricted interface
There is no provision for giving users the option of a restricted interface.
Ease of producing EIS-style reports
A portfolio can be defined for presenting a series of EIS-style reports in a
briefing book paradigm. Similarly, a personal page can be created to define
EIS-style web homepages.
Applications
Simple web applications
Web developers can build custom Internet Publisher applications using
HTML, JavaScript, Java, VBScript or ActiveX programs.
Development environment
Designer provides a graphical object-based development environment.
Applications are built from a set of standard visual objects (such as push
buttons or dialogues) and special dimensionally-aware objects such as tables.
Deployment
Platforms
Client
The Desktop and Designer clients run on Windows 95/98/NT.
Internet Publisher supports Microsoft Internet Explorer and Netscape web
browsers.
Server
Analysis Server runs on Windows NT and Unix (Solaris, HP-UX, AIX, Digital,
AT&T, Pyramid, NCR and Sequent). AS/400 is supported through an exclu-
sive partnership with SystemSource.
Internet Publisher server runs on Windows NT and Solaris. It uses
Microsoft’s Internet Information Server (IIS) as its web server.
Data access
PDSS provides native access to Oracle and Sybase relational databases.
Other relational sources are accessed via ODBC. It can also access ASCII
files. SDKs are provided to access ERP data sources, such as SAP.
Standards
PDSS does not support Microsoft’s OLE DB for OLAP API; support as a
consumer is planned.
Published benchmarks
Pilot has not participated in, or published, any OLAP benchmarks.
Price structure
Pricing for Analysis Server starts at $25,000 for up to ten users for Windows
NT and Unix platforms. The hybrid Analysis Server OLAP option is an
additional 50% of the server price.
The Internet Publisher prices start at $10,000 for up to ten users. Internet
users are charged $50 each; Desktop users are charged $895 each (which
includes the Analysis Library). A Designer developer licence costs $4,000.
The Reporting module costs $9,995.
The price of the packaged applications starts at $12,500 for ten users; please
note that these applications also need Analysis Server and Desktop licences.
At a glance
Terminology of the vendor
Ovum’s verdict
Product overview
Future enhancements
Commercial background
Product evaluation
Deployment
Platforms
Data access
Standards
Published benchmarks
Price structure
Evaluation: SAP AG – SAP Business Information Warehouse Ovum Evaluates: OLAP
At a glance
Developer
SAP AG, Walldorf, Germany
Version evaluated
SAP Business Information Warehouse (BW), version 1.2A
Key facts
• A data warehouse that is preconfigured to work with SAP R/3 data
• Server runs on Windows NT and Unix; clients run on Windows 95 and
Windows NT and use Microsoft Excel as a presentation layer
• BW is an independent product and has a separate release cycle from SAP
R/3
Strengths
• SAP delivers preconfigured business content and plans to add more in
future versions
• Preconfigured multidimensional models, extraction routines and reports
make initial implementation quick and easy if solely SAP data is being
used
• Central and easy to use tools for administering the whole data warehouse
Points to watch
• Limited range of end-user tools – no web access to models
• Building models from non-SAP data relies on R/3 expertise and the use of
third-party tools
• Requires SAP skills to set up, enhance and manage
Ratings
Web support
Management
Adaptability
Performance tunability
Customisation
Ovum’s verdict
What we think
SAP’s Business Information Warehouse (BW) comes fully equipped for SAP
infrastructures and will therefore slot smoothly into any R/3 environment. If
the preconfigured data extraction routines, multidimensional models and
reports that come with BW map closely to your organisation’s business
intelligence needs, then implementation can be relatively quick and easy. If
there is also a preference to ‘buy from a single vendor’, then BW will be a
compelling product for users wanting OLAP access to large amounts of SAP
data.
The tight integration with the R/3 OLTP modules and preconfigured busi-
ness content are undoubtedly the greatest strengths of the product, and
should allow SAP to carve itself a niche for quick turnkey implementations.
However, the downside is that it can best be used and extended by develop-
ers with strong SAP skills. Business users are therefore highly dependent on
IS to set up and supply them with specialised models. BW is primarily
geared to use R/3 data and SAP has yet to prove that it is easy to integrate
with non-SAP data sources and end-user tools; this relies on proprietary
BAPIs, expensive R/3 skills and third-party applications and tools.
SAP does not have great OLAP experience and BW, in the first release at
least, falls short of providing the advanced modelling, analytical and report-
ing capabilities found in more mature OLAP tools. Nor is it clear whether
SAP can roll out functionality fast enough to satisfy complex user needs.
Although the Excel client is sufficient for standard OLAP, BW is hindered by
its lack of front-end tools. The market is flooded with OLAP clients that
provide more flexible and powerful front ends.
BW comes with a degree of openness not usually associated with SAP.
However, SAP has not yet published the specifications for the BAPI that
OLAP tool vendors would use to get access to BW data. Full support for the
OLE DB for OLAP interface is also lacking.
When to use
SAP BW is most suitable if you:
• have most of your corporate data in R/3, and need direct analysis of data
in the transaction databases for decision support
• can closely map BW’s preconfigured models and extraction routines
against your organisation’s business intelligence needs
• want to buy a turnkey data warehouse package from a single, well-known
vendor, rather than build your own
• have SAP development skills in your organisation.
Product overview
Components
The key OLAP components of BW version 1.2A are:
• BW Server
• Administrator Workbench
• Business Explorer
• Data Extractors (for R/3)
• BAPIs (Business APIs).
Figure 1 shows the primary functions of the components and how they relate
to client-server systems.
BW Server
A mid-tier server that includes an OLAP engine, a metadata repository and
a database – all of which are preconfigured for R/3. The server processes all
OLAP requests and returns results data to clients.
InfoCubes
The BW database is structured into self-contained multidimensional data
‘containers’, called InfoCubes. An InfoCube is stored in a number of rela-
tional tables in a star schema. The database can reside within BW Server or
on a remote database server. SAP provides more than 20 preconfigured
InfoCubes. Users can also extend existing InfoCubes and create additional
ones.
InfoCubes contain InfoObjects (dimensions and measures). They are fed
from InfoSources that extract data from R/3 systems or external systems
such as relational data warehouses, flat files or other source systems.
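The star-schema layout described above can be sketched in a few lines of SQL. All table and column names below are invented for illustration; they bear no relation to BW’s actual physical schema:

```python
import sqlite3

# A miniature star schema for a sales "cube": one central fact table holding
# the measures, with one table per dimension around it.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_product  (product_id INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, name TEXT, region TEXT);
CREATE TABLE dim_time     (time_id INTEGER PRIMARY KEY, year INTEGER, month INTEGER);

-- The fact table references each dimension by key and stores the measures
CREATE TABLE fact_sales (
    product_id  INTEGER REFERENCES dim_product,
    customer_id INTEGER REFERENCES dim_customer,
    time_id     INTEGER REFERENCES dim_time,
    revenue     REAL,
    units       INTEGER
);
""")

cur.execute("INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware')")
cur.execute("INSERT INTO dim_customer VALUES (1, 'Acme', 'EMEA')")
cur.execute("INSERT INTO dim_time VALUES (1, 1999, 1)")
cur.execute("INSERT INTO fact_sales VALUES (1, 1, 1, 1000.0, 10)")
cur.execute("INSERT INTO fact_sales VALUES (1, 1, 1, 500.0, 5)")

# A typical OLAP request joins the fact table to whichever dimensions it
# slices by, then aggregates the measures
cur.execute("""
    SELECT t.year, p.category, SUM(f.revenue)
    FROM fact_sales f
    JOIN dim_time t ON f.time_id = t.time_id
    JOIN dim_product p ON f.product_id = p.product_id
    GROUP BY t.year, p.category
""")
print(cur.fetchall())  # [(1999, 'Hardware', 1500.0)]
```

Dimensions and measures (the ‘InfoObjects’) map naturally to the dimension tables and fact-table columns in such a layout.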
OLAP Processor
An OLAP engine that is used for processing data in InfoCubes. It provides
the methods needed to query data and perform OLAP analysis.
Staging Engine
The Staging Engine requests an extract from an InfoSource and performs
the necessary mappings and transformations needed to create InfoCubes. It
uses SAP’s ALE (Application Link Enabling) middleware for data
transport.
Metadata repository
The metadata repository stores business-related and technical metadata in
catalogues. ‘Business metadata’ includes content definitions, descriptions
and rules. ‘Technical metadata’ describes structures and transformation and
mapping rules for the data extraction and staging process.
Operational Data Store
An optional component that temporarily stores transactional data in BW.
The data format remains unchanged; no aggregations or transformations
take place. The Operational Data Store (ODS) is organised as a set of flat
tables, each assigned to an InfoSource. The ODS is primarily used as an
intermediate store for the staging process, allowing custom data scrubbing
and transformation tasks to be performed (using either SAP or third-party
tools) on a complete extract before it is mapped to InfoCubes. It also provides
a method for end users to drill down to transaction-level data without
entering the OLTP system.
Administrator Workbench
An administration tool for managing and extending the data warehouse
environment. It provides a graphical interface for scheduling data loads/
updates and monitoring processing tasks. Graphical tools are also provided
for defining and maintaining InfoCubes, InfoSources, metadata, setting
security and maintaining a report catalogue.
Business Explorer
The client component for BW. It consists of two parts:
• Report Browser – a web interface that enables the end user to display
metadata information about reports, and choose what models to explore
• Analyzer – an ad hoc query and analysis interface that uses Microsoft
Excel to display data. A BW add-on provides OLAP capabilities directly
from the spreadsheet.
The report catalogue can be consulted via the Internet using a web browser.
If the user wishes to interact with the data, Business Explorer fires up
Analyzer, which is really Excel with BW extensions. BW’s OLAP engine is
only activated when the data needs to be refreshed or a new view of the data
needs to be computed.
Data Extractors
A set of programs for the extraction of transaction data from R/3 OLTP
applications into BW. BW provides extract programs for all the major R/3
applications, including Logistics, Controlling, Finance and HR (human
resources). Tools are provided to extend the extractor routines. Initially, the
extractor programs pull the entire dataset across; on subsequent extractions,
they pull only incremental changes.
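The full-then-delta behaviour described above can be sketched as follows. The source rows and the bookmark variable are plain Python stand-ins, not SAP’s actual extractor interface:

```python
# On the first run there is no bookmark, so everything is pulled; on later
# runs only records changed since the bookmark are pulled, and the bookmark
# is advanced to the newest change seen.
source = [
    {"id": 1, "amount": 100, "changed_at": 10},
    {"id": 2, "amount": 200, "changed_at": 20},
]
bookmark = None  # no previous extraction yet

def extract(rows, since):
    """Return (batch, new_bookmark): full load first time, delta afterwards."""
    if since is None:
        batch = list(rows)                                    # full load
    else:
        batch = [r for r in rows if r["changed_at"] > since]  # delta load
    new_bookmark = max((r["changed_at"] for r in rows), default=since)
    return batch, new_bookmark

first, bookmark = extract(source, bookmark)   # pulls both existing records
source.append({"id": 3, "amount": 50, "changed_at": 30})
delta, bookmark = extract(source, bookmark)   # pulls only the new record
print(len(first), len(delta))  # 2 1
```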
BAPIs
BAPIs (Business APIs) are proprietary programming interfaces for business
applications, which SAP promises will remain stable. There are more than
400 provided by SAP. BAPIs are published at www.sap.com/bapi.
BW supports BAPIs for loading data from non-SAP data sources into BW
and integrating with third-party applications. SAP has four certified BAPI
partners for data extraction – ETI, Informatica, Prism and TSI – and is
working closely with a number of front-end OLAP tool vendors for integra-
tion.
Programmers can include BAPIs in programming languages such as Visual
Basic, Java and C, as well as SAP’s own development language – ABAP/4.
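A client program calling a BAPI might be shaped roughly as below. Everything here is hypothetical – the `RfcConnection` class, its `call` method and the BAPI name and parameters are invented for this sketch; a real program would go through SAP’s RFC libraries:

```python
# Hypothetical stand-in for an RFC connection to an R/3 system. The handler
# dict simulates the server side so the sketch is self-contained.
class RfcConnection:
    _handlers = {
        "BAPI_SALESORDER_GETLIST": lambda params: [
            {"order_id": "4711", "customer": params["CUSTOMER_NUMBER"]},
        ],
    }

    def call(self, bapi_name, **params):
        # A real implementation would marshal the parameters, invoke the
        # remote function module and unmarshal the result tables.
        return self._handlers[bapi_name](params)

conn = RfcConnection()
orders = conn.call("BAPI_SALESORDER_GETLIST", CUSTOMER_NUMBER="0000001234")
print(orders[0]["order_id"])  # 4711
```

The same call pattern – connect, invoke a named business function with typed parameters, receive result tables – is what makes BAPIs usable from Visual Basic, Java and C alike.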
Architectural options
Full mid-tier architecture
SAP BW does not support a full mid-tier MDDB architecture.
Using SAP BW
Business content
A key feature of the BW philosophy is that the information is organised into
meaningful ‘business content’ – a term that SAP uses to describe
preconfigured storage, presentation and data extraction objects that are
designed with business needs in mind. The objects provided by BW are based
on business processes that are executed in the R/3 system, of which there
are more than 900.
Figure 1 (architecture overview): the Business Explorer client – favourites, catalogue browser, report builder, and reporting and analysis for Excel – communicates through a BAPI with the business information warehouse server, which comprises the OLAP processor, staging engine, operational datastore and administration monitor; the Administrator Workbench manages the server, and a further BAPI links it to source systems.
• Data Extractors – data extractor programs are supplied for all the major
R/3 modules, including Finance, Controlling, Logistics and Human
Resources. SAP also provides customised routines to access data in older
R/2 systems or proprietary file structures
• report templates – BW includes a range of predefined reporting
templates for particular user types, such as production planners,
financial controllers, product managers and human resources managers.
Task-related reports (or queries) combine information from related
InfoCubes and organise them into report clusters; these clusters, in turn,
form the basis for a channel, in which a so-called ‘business role’ is defined.
Templates are also available for commonly required business parameters,
such as contribution margin.
Metadata repository
BW has a central metadata repository that contains information about both
the meaning of BW data, and its origins and transformations. The metadata
repository is preconfigured for R/3 and is dynamically linked to the enter-
prise data model. All developer activities are automatically captured in the
repository.
The repository organises the information into four catalogues:
• InfoObject catalogue – all the attributes and measures are described.
These InfoObjects can be re-used within multiple InfoSources and
InfoCubes
• InfoCube catalogue – stores the definitions of InfoCubes (the attributes
and measures contained within each InfoCube)
• Report catalogue – contains report descriptions and definitions. Using
Business Explorer the end user can view these and select reports to open
• InfoSource catalogue – as well as the InfoSource definitions, the
catalogue stores information about the mappings on to InfoCubes.
The Administrator Workbench provides a metadata management tool
(Metadata Manager) for maintaining the different catalogues.
Future enhancements
SAP is developing BW and its partnerships. The next planned release is
version 1.2B, due in the first quarter of 1999. The release will mainly include
additional business content rather than major technological developments.
OLAP-specific enhancements will include drill-through to transaction-level
data in OLTP and databases directly from the Business Explorer interface,
archiving and data replication, server load balancing and capacity planning
functions, and enhanced data visualisation capabilities.
SAP plans to extend the range of servers supported to include MVS in early
1999, and AS/400 at a future (as yet unspecified) date.
The next significant release (version 2.0) is planned for the third quarter of
1999 and will include technology and content improvements.
SAP plans to provide BAPIs for accessing data in BW, but these are not due
for release until the end of 1999.
Commercial background
Company background
History and commercial
SAP (Systems, Applications and Products in Data Processing) is the fourth-
largest independent software company in the world. The company was
founded in 1972 in Mannheim, Germany, by five ex-IBM software engineers.
SAP has not changed direction radically over the years and has been suc-
cessful in extending its market through renewing and expanding its core
business application software offering. SAP’s first major product was its
mainframe R/2 manufacturing solution. In 1991, SAP released R/3, the first
fully configurable client-server ERP system available on Unix. R/3’s business
process orientation, which permitted views of management accounts based
on multiple business views, was very much in tune with the business process
re-engineering movement at the time. This has allowed SAP to successfully
pilot R/3 to a leading position in the US and Europe. Ovum’s figures indicate
around a 40% share of the software licence revenues for the worldwide ERP
market, with more than 17,000 R/3 installations worldwide.

SAP BW was first released in September 1998, although the pilot programme
started in May 1997. By the end of 1998, SAP claimed around 300 shipments of
the product. As expected, almost all of these were to current SAP R/3 users.
SAP is publicly held and has its headquarters in Walldorf, Germany, with
offices worldwide. The company is structured around 17 industry-specific
business units and employs more than 17,000 people. SAP is listed on sev-
eral exchanges, including the Frankfurt stock exchange, the Swiss stock
exchange and the New York stock exchange. Figures for fiscal 1998 show
that revenues grew 40% to DM8.4 billion ($4.8 billion). However, pretax
profit growth of 15% was well below expectations. This shortfall was attrib-
uted to the weak Asian market and the decline in SAP’s Japanese activities.
Customer support
Support
SAP offers the same level of support for BW as for R/3. This includes an
around-the-clock telephone hotline and onsite and offsite support worldwide.
Training
BW training is available from most of SAP’s subsidiaries worldwide. So far,
SAP has fully translated the training materials only from German into
English.
Consultancy services
SAP provides consulting services for BW for implementation, application
creation and use. Consulting accounts for around 20% of SAP’s revenues and
is growing faster than software licence revenues. SAP also has partnerships
with global management consultants and system integrators.
Distribution
Europe
SAP
Neurottstrasse 16
69190 Walldorf
Germany
Tel: +49 6227 747 474
Fax: +49 6227 757 575
Americas
SAP America
701 Lee Road
Wayne, PA 19087
USA
Tel: +1 610 725 4500
Fax: +1 610 725 4555
Asia-Pacific
SAP Asia
750A Chai Chee Road
7th Floor Chai Chee
Industrial Park
Singapore 469001
Tel: +65 446 1800
Fax: +65 249 1818
E-mail: info@sap.com
http://www.sap.com
Product evaluation
End-user functionality
Summary
Business Explorer uses Microsoft Excel (with special BW extensions) for the
analysis and presentation of data. The interface is easy to use and flexible
enough to support a significant degree of ad hoc query and analysis, but
lacks the advanced functionality found in more mature OLAP client tools.
Preconfigured report templates are provided to quickly build up a catalogue
of reports that can be consulted via a web browser. There is no integration
with third-party OLAP front ends, and support for the targeted distribution
of reports is limited.
SAP provides preconfigured models that are designed for analysing R/3
data. New models based on previously defined dimensions and measures can
be easily built in the Administrator Workbench. If there is a need to integrate
non-SAP data in models, then new InfoSources will have to be defined. The
InfoSources provided by BW are intended to be used as they are; creating new
ones requires designers to have strong SAP development skills and use third-
party tools.
Basic design
Design interface
InfoCubes can be easily designed using the menu-driven tools provided by
Administrator Workbench. If predefined dimensions and measures are
available then this is a straightforward process. Otherwise, developers will
need to use additional SAP and third-party tools to define upstream integra-
tion with data sources.
Visualising the data source
The data source is only visualised upstream, when defining the InfoSources.
It is not possible to bring up a sample of data when creating InfoCubes.
Universally available mapping layer
Dimensions and measures defined in BW’s metadata layer can be re-used
across models. However, a universal mapping layer is not directly supported.
Prompts for metadata
Designers are prompted to include descriptive metadata, such as owner
details, contact details, description and rationale when building InfoCubes.
Multiple designers
Multiple designers
BW does not provide any special support for multi-designer environments.
However, developers can use the R/3 Basis system to provide change man-
agement, check-out/in and multi-user locking facilities for the repository.
Support for versioning
Version management is provided by the R/3 Change and Transport Organiser
(CTO).
Statistical models
Support is provided for a number of statistical functions, including
percentage share, ratio, correlation, percentage difference and variance.
Trend analysis
Average over period sum/count functions can be used for establishing trends.
Otherwise, BW relies on Excel for trend analysis functions.
Simple regression
BW relies on the linear regression functions provided by Excel.
Time-series forecasting
Simple time-related comparisons and trends are supported via Excel.
However, there is no support for advanced time-series forecasting methods.
User-definable extensions
Simple functions can be defined locally in a model using the formula editor
or the Excel macro functions. However, a procedural language is not
supported.
Data mining
There is no support for data mining.
Web support
Summary
Business Explorer offers a web interface, but only for browsing the metadata
catalogue to see what reports are available. Ad hoc query and analysis is via
Analyzer and requires a copy of Excel (with a BW add-on) to be installed
locally on the client machine. Effectively, there is no OLAP support via the
Web. However, if required, it can be provided by tools from some of SAP’s
partners.
Management
Summary
Management of models
Separate management interface
The Administrator Workbench provides a graphical overview of InfoObjects
and corresponding source systems, InfoSources and InfoCubes. These objects
can be easily managed using drag-and-drop functions.
Security of models
Access rights (authorisations) can be defined for queries, models or indi-
vidual InfoObjects. Security can be modelled freely for all elements in a
model, right down to field values.
Query monitoring
The ‘monitor’ facility provides detailed statistics on the frequency of query
execution and usage of summary levels. BW supports its own statistics
InfoCube for analysing and reporting on the collected data.
Management of data
How persistent data is stored (not scored)
Data is stored persistently in a relational database (in a star schema). The
database can reside in the BW Server or a separate database server.
Scheduling of loads/updates
SAP BW’s Scheduler facility is based on R/3’s scheduling system. It provides
a graphical interface that is used to define extract and load schedules ac-
cording to time and date criteria (hourly, daily, weekly, monthly or other
periods). Full or delta loads/updates are supported.
Event-driven scheduling
It is possible to schedule a data load or an update to the data warehouse
based on external events (for example, a specific transaction in R/3).
Failed loads/updates
The ‘monitor’ facility supervises the load and staging processes. It provides
detailed statistics on current and completed load jobs and notifies the ad-
ministrator of exceptions such as failures. An ‘assistant’ feature helps with
the analysis of failed and incorrect data loads/updates.
Distribution of stored data
Data can be stored in separate BW Servers or remote database servers. No
data is stored on the client.
Sparsity (only for persistent models)
Sparse data handling is the responsibility of the relational database, and is
not an issue in BW.
Methods for managing size
Typically, the size of InfoCubes runs into tens of gigabytes, rather than
hundreds. However, the maximum size is limited only by the RDBMS. BW
administrators can decide on the number of aggregate tables created to
control the size of the database.
In-memory caching options
R/3’s extensive memory management and caching functions are available for
fine-tuning performance.
Informing the user when stored data was last uploaded
All reports are time-stamped for each new data load.
Management of users
Multiple users of models with write facilities
Write-back to InfoCubes is not supported. However, certain SAP applications
that work with BW do allow write-back to the source operational systems.
User security profiles
User security is based on individual and group authorisation profiles. BW
uses the same authorisation schema used in R/3.
Query governance
SAP BW does not support query governance.
Management of metadata
Controlling visibility of the ‘roadmap’
SAP authorisation checks can be used to restrict access to any functions or
objects in the SAP BW environment. User authorisations are summarised in
the form of profiles. Different authorisations are required for working with
the Administrator Workbench and the Business Explorer.
Adaptability
Summary
Dimensions and measures can easily be added to models. All these defini-
tions are stored and maintained in a single metadata repository, allowing for
easy re-use within multiple models. If SAP data is being used, structural
changes to the data source are automatically synchronised within BW. How-
ever, if customised ABAP programs are used for accessing non-SAP data, the
maintenance overheads can rise considerably.
Metadata
Synchronising model and model metadata
InfoCubes are automatically synchronised with the metadata repository.
However, descriptive metadata assigned to InfoCubes must be maintained
manually.
Impact analysis
There is no direct support for anticipating the consequences of changes in
data sources to models and reports.
Metadata audit trail (technical and end users)
The metadata repository provides a history of the metadata.
Access to upstream metadata
Metadata from an R/3 system can be imported into BW. Access to metadata
from data warehousing tools is via BAPIs. SAP has four certified partners in
this space: ETI, Informatica, Prism and TSI.
Performance tunability
Summary
BW resides on its own dedicated server and is separate from the OLTP
system and other source systems. Scalability is achieved using multipass
SQL, distributed processing and SMP technology. The caching mechanisms
within the tool have also been carefully designed to maintain performance.
Overall, getting the best performance from BW can be expensive, although
justifiable.
ROLAP
Multipass SQL
BW automatically generates multipass SQL.
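The idea behind multipass SQL is that some OLAP requests – for example, each region’s share of total revenue – cannot be expressed in a single GROUP BY, so the engine issues several aggregation passes and combines the results. The sketch below issues the passes by hand against a toy table (BW generates them automatically; the schema here is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sales (region TEXT, revenue REAL)")
cur.executemany("INSERT INTO sales VALUES (?, ?)",
                [("EMEA", 300.0), ("EMEA", 100.0), ("APAC", 100.0)])

# Pass 1: aggregate revenue per region
cur.execute("SELECT region, SUM(revenue) FROM sales GROUP BY region")
per_region = dict(cur.fetchall())

# Pass 2: grand total over all regions
cur.execute("SELECT SUM(revenue) FROM sales")
total = cur.fetchone()[0]

# Combine the two passes into the final result set
shares = {region: rev / total for region, rev in per_region.items()}
print(shares["EMEA"], shares["APAC"])  # 0.8 0.2
```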
Options for SQL processing
Data processing is optimised between the OLAP Processor and the BW
database.
MOLAP
BW is not a MOLAP tool.
Processing
Use of native SQL to speed up data extraction
BW supports native SQL generation for accessing its relational database.
Distribution of processing
Multiple BW Servers can be connected. However, there is no support for
intelligently balancing processing across the servers.
SMP support
SAP BW supports SMP parallelism.
Customisation
Summary
Customisation
Option of a restricted interface
There are no facilities for providing restricted interfaces.
Ease of producing EIS-style reports
There is no direct support for producing EIS-style reports. This can only be
achieved using third-party development tools.
Applications
Simple web applications
There is no direct support for developing web applications.
Development environment
R/3 customers can use SAP’s proprietary ABAP/4 fourth generation pro-
gramming language. SAP provides ABAP Workbench with the SAP R/3
system, which is a development platform for client-server applications. It
includes a repository, editor, dictionary, function builder, screen/menu paint-
ers and tools for testing and debugging R/3 applications. ABAP can be used
to communicate with both the application server layer and the client. All BW
objects are accessible to the ABAP Workbench.
Use of third-party development tools
BAPIs can be accessed from development environments such as Visual
Basic, Visual J++ and Visual Age.
Deployment
Platforms
Client
SAP BW Business Explorer runs on Windows 95 and Windows NT. A local
copy of Microsoft Excel is required on the client to run Analyzer.
Server
SAP BW Servers and database servers run on Windows NT and all the
major versions of Unix.
Data access
SAP BW is primarily designed to work with SAP R/3 data, although it can
also easily access data in other BW systems. SAP can provide customised routines to
enable customers to access data in R/2 systems and from content providers
such as Dun and Bradstreet. A load interface is provided for feeding in
flat files. Data from other non-SAP sources is loaded using BAPIs. This can
be achieved either by users writing applications or through the use of SAP’s
partners that have been certified, such as ETI, Informatica, Prism and TSI.
The first three of these are evaluated in Ovum Evaluates: Data Warehousing
Tools.
Standards
SAP BW supports SAP’s proprietary BAPIs to feed data into BW.
For accessing data in BW, SAP provides a subset of Microsoft’s OLE DB for
OLAP protocol. This is used by Microsoft Excel when running as the Business
Explorer Analyzer.
Published benchmarks
SAP BW does not have any published OLAP benchmarks.
Price structure
Pricing for SAP BW is based on named users. The entry level cost for BW
Server is DM250,000 (approximately $144,200) for 250 users. Existing SAP
users can upgrade to SAP BW for DM1,000 ($575).
Summary
At a glance
Terminology of the vendor
Ovum’s verdict
Product overview
Future enhancements
Commercial background
Product evaluation
Deployment
Platforms
Data access
Standards
Published benchmarks
Price structure
At a glance
Developer
Seagate Software Information Management Group, Scotts Valley, CA, USA
Versions evaluated
Seagate Holos version 7.0
Key facts
• A client-server application development environment for building OLAP
applications
• Runs on Windows NT, Unix and VMS servers; clients run on Windows 3.1,
Windows 95, Windows NT and Macintosh
• Part of a comprehensive suite of business intelligence applications being
assembled by Seagate Software
Strengths
• A functionally rich OLAP tool with strong support for custom application
development
• Supports a range of OLAP architectures, and provides flexible data
storage and access options
• Supports an extensive set of advanced analytical functions
Points to watch
• Little ‘out-of-the-box’ functionality
• Can be complex to set up and manage – considerable thought has to go
into the model and application design process
• End users rely entirely on IS to build models and OLAP applications
Ratings
Web support
Management
Adaptability
Performance tunability
Customisation
Terminology of the vendor
Agent
Used by Holos to perform time-consuming, complex or repetitive tasks in the
background, as batch programs.
Holos desktop
A custom end-user interface built using the Holos development tools. Usu-
ally called a Holos application.
Model
A collective term for a combination of structures that store data and rules
which define calculations to be performed on the data.
Open OLAP
An integration strategy that allows Holos to incorporate data stored in
third-party multidimensional databases into Holos structures.
Report
A common way of presenting information to the user of a Holos application.
A report is a view from a particular perspective into a multidimensional
model. It can also contain other types of information such as graphs, images
and OLE objects and controls.
Rules
Holos rules define relationships between dimension members and calculated
measures. An example of a rule could be a calculation such as variance.
Rules are generally held in a ruletable.
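The ruletable idea – calculated measures defined as formulas over base measures – can be illustrated as below. The dict-of-formulas representation is only an illustration, not Holos Language syntax:

```python
# A toy "ruletable": each entry names a calculated measure and gives the
# formula that derives it from base measures such as actual and budget.
ruletable = {
    "variance":   lambda m: m["actual"] - m["budget"],
    "variance_%": lambda m: 100.0 * (m["actual"] - m["budget"]) / m["budget"],
}

# Applying every rule to one cell of base data yields the derived measures
cell = {"actual": 120.0, "budget": 100.0}
derived = {name: rule(cell) for name, rule in ruletable.items()}
print(derived["variance"], derived["variance_%"])  # 20.0 20.0
```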
Structures
Define dimensions and their relationships and include information on how
the underlying data is stored and retrieved. Holos supports four types of
structure: memory-based, disk-based, relational and ‘Open OLAP’. A struc-
ture corresponds to Ovum’s definition of a business model.
Ovum’s verdict
What we think
Seagate Holos is aimed primarily at application developers. Holos’s powerful
4GL is purpose-designed for building OLAP applications and provides a
broad range of development options. The graphical development tools offer
some strong features for building sophisticated OLAP applications for
specialist analytical requirements.
One of Holos’s greatest strengths is its breadth of functionality. The tool
supports a wide range of OLAP architectures, and provides one of the
strongest sets of advanced statistical analysis and forecasting functions in
the OLAP market. Scalable performance is also achieved through the use of
optimised multi-cubes and distributed and parallel processing. However,
under the covers, Holos is a complex OLAP product, and its learning curve is
steeper than most other OLAP products. The compound multi-cube architec-
ture can be complex to set up and there are limited graphical management
facilities to maintain the system. The Holos Web Gateway is Seagate Soft-
ware’s first attempt at web access. Although it provides a functional web
interface, its HTML interface is far from elegant.
Holos is targeted at the high-end of the OLAP market, which consists of
large corporates seeking to deploy OLAP throughout the enterprise. Any
purchase decision therefore requires a strong commitment to the Holos
development philosophy, not to mention a significant entry-level investment.
The main challenge for Seagate Software is to straddle the requirements of
proprietary language-based development, and the provision of ‘ready-to-use’
tools that provide greater freedom for end users and remove the reliance on
developers and IS to deliver OLAP applications.
When to use
Holos is suitable if you:
• are willing to make a strategic commitment to Holos for decision support
• require highly customised OLAP applications
• have in-house development skills to exploit
• want applications with advanced statistical analysis and forecasting
• require flexible models built from disparate data sources.
It is less suitable if you:
• are looking for an ‘out-of-the-box’ OLAP solution that is quick and easy to
implement
• have users that require flexible, ad hoc business modelling
• do not wish to commit yourself to a significant development effort
• do not want IS involvement in supporting OLAP applications.
Product overview
Components
The main components of Holos 7.0 are:
• Holos development environment
• Holos Agents
• Seagate Worksheet
• Holos Web Gateway.
Figure 1 shows the primary functions of Holos components and whether
they run on the client or the server.
Holos is a client-server application development tool for building OLAP
applications. The tool is based on a powerful 4GL (called Holos Language),
which is purpose-built for developing business intelligence applications;
application components are created as Holos Language scripts.
Holos is a flexible hybrid OLAP product that supports relational and multi-
dimensional data storage options, with an interesting overlaying option.
OLAP processing and code development is usually completed on the server.
Holos extensively uses agents to automate a number of processing tasks. At
the client end, Holos relies mainly on the development of custom applica-
tions to access data and perform OLAP analysis. An ‘out-of-the-box’
spreadsheet-style interface is provided. Seagate Crystal Reports and
Seagate Info support read-only access to Holos. Holos can also be accessed by
ODBC-compliant applications. The Holos Web Gateway provides access to
the Holos server via a web browser; it supports HTML and Java tools.
Figure 1 (component overview): the Holos development environment spans client and server; Holos Agents and the Holos Web Gateway run on the server.
Data Manager
This provides a GUI point-and-click interface for creating Holos data struc-
tures. Structures define dimensions and their relationships. They also
include information on how the underlying data is stored and retrieved. A
Holos ‘model’ is a combination of a structure and a collection of business
rules that define further relationships between dimensions and additional
calculated measures.
The Data Manager is also used to define the loading of data into the struc-
tures from relational databases, flat files and other supported data sources.
Hierarchy Manager
Used to define simple parent/child relationships between dimension mem-
bers. It is also used to define additional drill hierarchies using a graphical
editor.
Data Filter
Used to define selection criteria for use with Holos reports, in the Worksheet
front-end tool or as standalone processes.
Report Designer
A tool for designing multidimensional reports, and enabling OLAP functions
to be defined within them.
Desktop Designer
Used to design custom application interfaces and desktops. It integrates
reports and application components in a single user interface. Developers
can also use the tool to tailor their own development workspaces.
Dialogue Designer
Used to create Visual Basic-style client dialogues for use in applications.
Seagate Worksheet
An end-user tool that presents a spreadsheet-style interface for ad hoc
analysis of models. The Worksheet is designed for power users that require
flexible ad hoc OLAP capabilities. The Worksheet interfaces directly with
models stored on the server. It can access all the analytical and data ma-
nipulation functions provided by Holos. A Java version is also available.
Architectural options
The Holos development environment supports a wide range of application
development styles and implementation architectures. This flexibility is
possible due to the number of data storage and processing options that can
be configured by developers when designing OLAP applications.
(Figure: examples of ‘racked’ and ‘stacked’ cube structures, dimensioned by region (Northern, Southern, Eastern), month (Jan–Apr), product and measure.)
Racking occurs when multiple cubes are joined in parallel along a ‘backbone’
dimension. For example, three cubes containing sales data dimensioned by
region, product and line item can each hold data for different months. These
three cubes can be ‘racked’ together by introducing a time dimension to act
as a backbone – each of the monthly structures would contribute the data for
one field in the time dimension. The resulting compound cube behaves like a
standard cube dimensioned by region, product, line item and time, but would
contain no data of its own; instead it would provide pointers to data in the
base cubes.
Stacking is a less obvious form of joining. It connects two identically dimen-
sioned structures in a series – in this case, data is always read preferentially
from the first structure. When data is read from the resulting compound
cube, Holos first interrogates the top cube of the stack. If the data is found, it
is returned; if not, the bottom cube is interrogated. In contrast, when data is
written to the compound cube, it is only ever written to the top cube.
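The read-preference and write semantics of stacking, and the pointer behaviour of racking, can be sketched in a few lines of Python. This is an illustrative sketch with hypothetical class names; Holos implements these joins natively in its storage engine, not in application code.

```python
class Cube:
    """A trivial cube: maps coordinate tuples to values."""
    def __init__(self):
        self.cells = {}

    def read(self, coords):
        return self.cells.get(coords)

    def write(self, coords, value):
        self.cells[coords] = value


class StackedCube:
    """Two identically dimensioned cubes joined in series:
    reads prefer the top cube, writes go only to the top cube."""
    def __init__(self, top, bottom):
        self.top, self.bottom = top, bottom

    def read(self, coords):
        value = self.top.read(coords)
        return value if value is not None else self.bottom.read(coords)

    def write(self, coords, value):
        self.top.write(coords, value)


class RackedCube:
    """Cubes joined in parallel along a 'backbone' dimension: the
    compound cube holds no data of its own, only pointers to base cubes."""
    def __init__(self, backbone_to_cube):
        self.base = backbone_to_cube  # e.g. {'Jan': jan_cube, 'Feb': feb_cube}

    def read(self, coords):
        backbone, rest = coords[0], coords[1:]
        return self.base[backbone].read(rest)
```

A stacked compound cube therefore behaves like an overlay: new values written to the top cube mask, but never destroy, the values held in the bottom cube.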
Full mid-tier architecture
This is the ‘natural’ architectural configuration for Holos where data re-
trieval, manipulation and formatting are done on the server, with the client
interface handling user requests and presentation issues.
The full mid-tier architecture stores models and data in a mid-tier server.
OLAP calculations and processing are done on the server. Holos disk struc-
tures represent the MDDB, although an open OLAP structure could refer-
ence an Essbase or Microsoft SQL Server OLAP Services multidimensional
database. Typically, the MDDB is built using multiple structures (multi-
cubes) that can be linked together to create a ‘virtual’ structure.
The client interface can be an application created using the Holos develop-
ment tools, the Seagate Worksheet, an Excel add-in or a web browser (if the
Holos Web Gateway is also running on the server).
Desktop architecture
A cut-down, standalone version of Holos is provided to support a two-tier
desktop architecture. This can be configured to work directly against an
RDBMS and carry out processing on the client machine.
Mobile architecture
The standalone version allows Holos applications to be run on a laptop
computer. A structure can be downloaded locally for offline analysis.
Using Holos
Holos is primarily an application development environment. As such, it
clearly defines roles and tools for developers and end users:
• the development environment and design tools provided are geared
towards experienced application developers with strong programming
skills. Typically, developers create the business model and the application
interface, including OLAP functions and access rights
• end users access models assigned to them via a custom OLAP interface,
or by using the Seagate Worksheet interface.
Principal development concepts
It is easier to understand the Holos approach to OLAP application develop-
ment if three important concepts – structures, rules and models – are clari-
fied.
• A structure is both a storage type and a definition of a set of dimensions
and hierarchies. The structure is, in essence, a metadata layer that maps
dimensions, dimension members and aggregation hierarchies on to a
physical storage format. Structures can be implemented in a number of
different ways. Different structures can be linked within a compound
structure to appear as a single structure.
• Holos rules define relationships between fields (dimension members and
measures). A rule may, for example, define a calculated measure such as
variance. A rule may also define the derivation rules for a hierarchical
relationship between members. For easier management, a set of rules can
be held together in a ruletable.
• A model is a collective term for a combination of structures (which store
data) and rules (which define calculations to be performed on that data).
Models are defined by declaring their constituent objects (structures and
rules). Holos provides automatic validation of models to ensure rules are
correctly applied. It also ensures that rules are always implemented in
the correct order (so that any dependencies between calculations are
handled correctly). A model can have a number of structures and
ruletables, and a structure can be used with different models.
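The split between structures (data) and rules (calculations) can be illustrated with a minimal sketch. The Python class names here are hypothetical stand-ins for what Holos expresses in its own 4GL; the example shows a ‘variance’ rule, as mentioned above, defined as a calculated measure over a structure.

```python
class Structure:
    """Maps (member, measure) pairs to stored values - the storage layer."""
    def __init__(self, data):
        self.data = data

    def get(self, member, measure):
        return self.data[(member, measure)]


class Rule:
    """Defines a calculated measure in terms of other measures."""
    def __init__(self, measure, fn):
        self.measure, self.fn = measure, fn


class Model:
    """A model combines structures (which store data) with rules
    (which define calculations to be performed on that data)."""
    def __init__(self, structure, rules):
        self.structure = structure
        self.rules = {r.measure: r for r in rules}

    def value(self, member, measure):
        if measure in self.rules:                 # calculated measure
            return self.rules[measure].fn(self, member)
        return self.structure.get(member, measure)  # stored measure


# A 'variance' rule: actual minus budget.
variance = Rule('variance',
                lambda m, mem: m.value(mem, 'actual') - m.value(mem, 'budget'))
```

Because the same rule object can be applied to any structure exposing the measures it references, a structure can be reused across different models, as the text describes.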
A worksheet view can also be saved to provide the basis of a report defini-
tion, which is tailored in the Report Designer tool.
Future enhancements
Seagate Software plans a number of enhancements to Holos with two major
releases.
• The ‘Tasman’ release, due in early 1999, will provide a number of
significant enhancements to Holos’s web offerings and its development
environment.
• The ‘Magellan’ release is scheduled for the second half of 1999 and will
focus on making Holos structures more scalable. It will also provide
greater management capabilities, specifically a toolkit for implementing
application and user security and a GUI tool for administering multiple
OLAP servers.
Seagate Software is also endeavouring to integrate its OLAP products more
closely to provide a business intelligence suite. The next version of Seagate
Info, code-named Polaris and scheduled for release in December 1998, will
allow Holos applications to be run directly from its interface.
Commercial background
Company background
History and commercial
The original developer of Holos was Holistic Systems, a British company
formed in 1987. In June 1996, Seagate Technology purchased Holistic Sys-
tems for $84 million as part of its strategy to build up a $1 billion software
arm.
Seagate Technology is perhaps best known as a manufacturer of disk drives,
but has been building up its software business. It acquired Crystal Services
in 1994, which developed the now ubiquitous Crystal Reports. This was
followed by Holistic Systems and several network storage management
software products. Together, these companies now form Seagate Software, a
wholly-owned subsidiary of Seagate Technology. The business intelligence
technologies have now been brought under the wing of Seagate Software’s
IMG.
Seagate Technology is a publicly held company with revenues of $6.8 billion.
Seagate Software’s revenues for fiscal 1998 grew 35% to $293 million.
Seagate Software is based in Scotts Valley, California, US, and employs 1,800
people. It also has representation in 40 countries worldwide.
Customer support
Support
Help-desk support is provided through all local offices via the telephone or
the Web.
Training
A range of Holos courses are provided for developers and end users. Develop-
ers may need some consulting support in addition to training to establish
first links between Holos structures and the underlying database. Computer-
based training is also available.
Consultancy services
The worldwide professional services division is growing rapidly. Services
provided can be basic (installation or start-up) or can scale up to address
enterprise issues such as requirements analysis and application design.
Seagate also has referral partnerships with management consultants.
Distribution
North America
Seagate Software
Information Management Group
840 Cambie Street
Vancouver
British Columbia V6B 4J2
Canada
Tel: +1 604 681 34 35
Fax: +1 604 681 29 34
Europe, Middle East and Africa
Seagate Software
The Broadwalk
54 The Broadway
Ealing
London W5 5JN
UK
Tel: +44 181 566 2330
Fax: +44 181 231 0600
Asia-Pacific
Seagate Software IMG Australia
Level 9, 42 Alfred Street
Milsons Point
New South Wales
Australia
Tel: +61 2 9955 4088
Fax: +61 2 9955 7682
http://www.seagatesoftware.com
E-mail: info@seagatesoftware.com
Product evaluation
End-user functionality
Summary
Core OLAP analysis is well supported via the Seagate Worksheet or custom
application interfaces. The Worksheet is well suited to power users wanting to
take full advantage of Holos’s advanced forecasting and modelling capabili-
ties from a ‘no-nonsense’ spreadsheet-like interface. However, Holos is a tool
for IS developers building applications for diverse user requirements. Holos
supports report distribution via the Web or by integrating with reporting
tools such as Seagate Info. Holos data can also be exported to Lotus Notes
databases as a shared resource, for distribution to group working environ-
ments. Report subscription services are not provided.
Basic design
Design interface
The Holos tools provide an easy-to-use point-and-click interface for creating
structures, dimensional hierarchies and calculations. However, there is no
single view of the various structures and dimensions that are available for
use in applications.
Visualising the data source
When building the structure, the database, flat file or source schema can be
viewed on-screen, and a sample of data displayed.
Universally available mapping layer
There is no direct support for providing end users with a universal mapping
layer.
Prompts for metadata
Developers and end users are not automatically prompted to provide addi-
tional metadata during the model design, application development or report
design process. Optional metadata can be included. The metadata can be as
detailed as required, and can be stored with the object concerned or linked to
a database table.
Multiple designers
Holos supports a built-in file control system that provides standard check-
out/check-in facilities.
Support for versioning
Holos supports its own version control system, and can also link to external
versioning and change management systems.
User-definable extensions
The Holos external function interface can be used to access procedural
analytical functions created using external tools; for example, SAS. The
Holos language can also be used to extend the analytical capability of
applications.
Web support
Summary
Holos supports both HTML and Java-based web interfaces. The HTML
implementation is far from elegant, but does provide web users with a simple
and effective means of accessing and navigating through reports, albeit in a
restricted manner. The Java implementation provides a more flexible
interface for OLAP analysis. The web tools are aimed at end users; there is no
support for designing models or developing applications.
Management
Summary
Management of models
Separate management interface
Application Manager is a general interface that is used to manage all as-
pects of the application development environment, including models.
Security of models
In general, Holos relies on the server operating system (Windows NT, Unix
or Open VMS) and the security in the target database to ensure access
controls. All Holos models can be defined as read-only or hidden from users.
Tighter controls can be built using the Holos language. For example, alias
definitions of structures can be used to restrict user access to parts of the
model.
Query monitoring
Holos provides query and application monitoring facilities. Statistics can be
collected and analysed on usage of models, reports and other system
components.
Management of data
How persistent data is stored (not scored)
Holos is a hybrid OLAP tool that supports both relational database and
multidimensional database storage options.
Scheduling of loads/updates
Data loading can be scheduled by attaching a loading script to an Agent that
defines the process for updating the models and applications with fresh data.
There is no point-and-click support for defining these schedules.
Event-driven scheduling
Agents can be defined that automatically watch for events such as an update
to the database or the creation of a new file, and subsequently execute a
schedule.
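Agents themselves are written as Holos Language scripts; the pattern they implement – watch for an event such as the arrival of a file, then execute the load – can be sketched generically. This is a hedged Python illustration of the polling pattern, not Holos syntax; the function name and trigger-file convention are assumptions.

```python
import os
import time


def watch_and_load(trigger_path, load_fn, poll_seconds=60, max_polls=None):
    """Poll for a trigger file (e.g. dropped by an upstream process);
    when it appears, run the load and consume the trigger.
    Returns True if a load ran, False if polling gave up first."""
    polls = 0
    while max_polls is None or polls < max_polls:
        if os.path.exists(trigger_path):
            load_fn()                     # the scheduled load process
            os.remove(trigger_path)       # consume the event
            return True
        time.sleep(poll_seconds)
        polls += 1
    return False
```

A production agent would add the logging and failure notification described under ‘Failed loads/updates’; this sketch shows only the event-to-schedule linkage.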
Failed loads/updates
Holos automatically generates a comprehensive log of all load processes.
Holos language scripts determine how failed uploads and updates are
handled on a per-application basis. Agents can be linked to scripts to notify
administrators of failed loads and updates via e-mail.
Distribution of stored data
Stored data can be distributed across multiple servers for storage. For
example, it is possible to store a multi-year time series as a set of ‘annual’
structures, each of a different type, and each stored on separate servers.
Holos provides facilities to calculate structures held in different servers.
Sparsity (only for persistent models)
Developers can specify sparse data processing algorithms according to the
type of structure and its degree of sparsity. Sparse disk structures are
indexed using a hash algorithm to provide optimised handling of sparse data
sets.
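Hash-indexed sparse storage can be illustrated compactly: a Python dictionary is itself a hash table, so keying it by coordinate tuple stands in for the hash algorithm Holos applies to sparse disk structures. All names here are hypothetical.

```python
class SparseStructure:
    """Stores only populated cells, keyed (and hashed) by coordinates."""
    def __init__(self, dimensions):
        self.dimensions = dimensions      # e.g. ('region', 'product', 'month')
        self.cells = {}                   # hash table: {(coord, ...): value}

    def write(self, coords, value):
        if len(coords) != len(self.dimensions):
            raise ValueError('coordinate arity mismatch')
        self.cells[coords] = value

    def read(self, coords):
        return self.cells.get(coords, 0.0)   # unstored cells read as zero

    def density(self, sizes):
        """Fraction of the logical cube that is actually populated."""
        logical = 1
        for n in sizes:
            logical *= n
        return len(self.cells) / logical
```

Storage then grows with the number of populated cells rather than with the full cross-product of the dimensions, which is the point of sparse handling.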
Methods for managing size
Holos’s smart consolidation facility can be used to calculate some values on
demand only. Developers can decide which values to precalculate, and which
to store for each model.
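The precalculate-versus-derive trade-off can be sketched as follows. This is a hypothetical Python illustration of the idea, not the Holos mechanism: aggregates the developer nominates are stored at load time, while all others are calculated on demand from the hierarchy.

```python
class SmartConsolidation:
    """Precalculate chosen aggregates at load time; derive the rest on demand."""
    def __init__(self, base, precalc_keys):
        self.base = base                    # leaf-level data: {member: value}
        self.precalc_keys = set(precalc_keys)
        self.stored = {}                    # precalculated aggregates

    def load(self, hierarchies):
        # hierarchies: {parent: [children]}
        self.hierarchies = hierarchies
        for parent in self.precalc_keys:
            self.stored[parent] = self._calc(parent)

    def _calc(self, member):
        children = self.hierarchies.get(member)
        if children is None:                # leaf: read stored base data
            return self.base[member]
        return sum(self.value(c) for c in children)

    def value(self, member):
        if member in self.stored:           # precalculated at load time
            return self.stored[member]
        return self._calc(member)           # calculated on demand
```

Nominating fewer aggregates shrinks the stored structure at the cost of query-time calculation, which is exactly the size/performance decision the developer makes per model.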
In-memory caching options
Various aspects of the server cache, such as bucket size, can be configured to
optimise performance.
Informing the user when stored data was last uploaded
A log is created each time data is loaded into a structure. The information in
the log can easily be displayed in client applications. However, there is no
support for accessing upstream load process metadata from the data ware-
house.
Management of users
Multiple users of models with write facilities
Multi-user locking is automatic in relational structures. A toolkit is provided
to help manage multiple update users on a disk-based structure. However,
simultaneous user locks must be programmed in the Holos language.
User security profiles
User security profiles are defined using the Holos language. Profiles can be
assigned to individual users or groups of users.
Query governance
In the relational environment, the size of data blocks returned from the
database can be controlled.
Restricting queries to specified times
There is no support provided for restricting queries according to time.
Management of metadata
Controlling visibility of the ‘road map’
Developers can define the complete user environment, or set up application
development groups that restrict access to specific Holos components.
Adaptability
Summary
Metadata
Synchronising model and model metadata
Apart from dimension and measure information, there is little model
metadata to synchronise in Holos.
Impact analysis
There is no support for impact analysis.
Metadata audit trail (technical and end users)
Holos does not provide any direct support for an audit trail. However, infor-
mation stored in scripts while creating and modifying the structures can
easily be logged to an external file for analysis.
Access to upstream metadata
There is no integration with externally generated metadata.
Performance tunability
Summary
Holos provides strong tunability features for both MOLAP and ROLAP
operation. For ROLAP mode, Holos supports the generation of multipass SQL
and native SQL access to all the major relational databases. For MOLAP
configurations, multidimensional structures can be loaded incrementally. The
loading and precalculation of data can also be distributed across multiple
processors simultaneously using SMP technology.
ROLAP
Multipass SQL
Holos automatically generates multipass SQL.
Options for SQL processing
The processing of SQL can be carried out either on the server or the data-
base.
Speeding up end-user data access
Data access can be speeded up by caching queries on the server in an
optimised form. Users can access the cache for matching queries. The cache
can be defined to constantly refresh itself, though end users are not auto-
matically informed of its currency.
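The mechanism – equivalent queries from different users served from one server-side cache entry – can be sketched as follows. This is hypothetical Python illustrating the pattern, not the Holos implementation; the normalisation rule (lowercasing and whitespace collapsing) is an assumption.

```python
import hashlib


class QueryCache:
    """Cache query results keyed by a normalised form of the query text,
    so textually equivalent queries hit the same entry."""
    def __init__(self, run_query):
        self.run_query = run_query   # callable that executes SQL on the RDBMS
        self.entries = {}

    @staticmethod
    def _key(sql):
        normalised = ' '.join(sql.lower().split())   # case/space-insensitive
        return hashlib.sha256(normalised.encode()).hexdigest()

    def fetch(self, sql):
        key = self._key(sql)
        if key not in self.entries:                  # cache miss: run query
            self.entries[key] = self.run_query(sql)
        return self.entries[key]

    def refresh(self):
        """Drop all entries so the next fetch re-runs against fresh data."""
        self.entries.clear()
```

As the text notes, the refresh cycle is invisible to end users, so cached results can be stale relative to the warehouse without the user being told.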
Aggregate navigator
The relational structure provides a caching mechanism that stores aggre-
gates on the server. This cache can be stored and reloaded as required.
MOLAP
Trading off time/size and performance
Large logical structures can be incrementally loaded. The underlying physi-
cal structures can be independently loaded or refreshed. Once the load is
complete, the multi-cubes can be ‘snapped’ back into the main compound
cube. Scripts can be developed to re-calculate only those values that have
changed during a refresh.
Processing
Use of native SQL to speed up data extraction
Holos uses native SQL access to all the major RDBMSs. ODBC drivers are
supported on certain Unix platforms.
Distribution of processing
The loading and pre-calculation of data can be spread across many proces-
sors; either inside a single multi-processor system or across a loosely clus-
tered network of machines.
SMP support
Holos makes full use of SMP parallelism.
Other performance tunability features
All the structure types have associated tuning mechanisms and can be
modified at runtime. Holos can alter the sparsity algorithms used to calcu-
late and consolidate data dynamically. This means that a structure can be
sparse at calculation time, and defined dense at runtime so that different
access methods can be employed.
Customisation
Summary
Customisation
Option of using a restricted interface
The Worksheet cannot be configured to provide a restricted interface. How-
ever, the Desktop Designer tool provides a point-and-click method for cus-
tomising the functionality of Holos applications, desktops and workspaces.
Ease of producing EIS-style reports
EIS-style reporting applications can easily be built using the graphical
development tools.
Applications
Simple web applications
A toolkit is provided to develop applications that run through a web browser.
A transformer utility is provided to convert Holos language scripts into
HTML, and present them to web users as a series of dynamic HTML pages.
Web applications can access all the functions of the Holos server, but do not
support as many features as Holos client applications.
Development environment
Holos provides a flexible and productive development environment. Easy-to-
use GUI tools allow most development work to be done via point-and-click. A
full programming environment for the Holos language is provided for writ-
ing custom procedures and linking them into applications.
Use of third-party development tools
Holos does not integrate with external development tools.
Other customisation features
Holos has strong support for localisation; a single OLAP application can be
written to support different language interfaces including German, French,
Italian, Spanish, Portuguese and Japanese character sets.
Holos can also act as an OLE2 client.
Deployment
Platforms
Client
Holos clients run on Windows 3.1, Windows 95, Windows NT and Macintosh.
The Seagate Worksheet runs on Windows 95 and Windows NT and Java-
based web browsers.
Server
The Holos server runs on Windows NT, DEC VMS and the following Unix
flavours: HP-UX, AIX, Sequent, IRIX (Silicon Graphics), SunOS, Digital
Unix, AT&T, Pyramid, ICL DRS/NX and Siemens Nixdorf.
Data access
Holos provides native access to the following RDBMSs: Oracle, Informix,
Sybase, Red Brick, Teradata, Rdb, Ingres, IBM DB2/6000 and HiRDB. Holos
also supports ODBC (Windows NT and Unix) and can access data from
third-party multidimensional databases (Essbase and Microsoft SQL Server
OLAP Services).
Holos can also access transactional data from SAP and Oracle ERP applica-
tions and can load data directly from Lotus Notes databases and flat file
systems. Links to external information providers, such as online news
services, are also supported.
Standards
Holos supports Microsoft’s OLE DB for OLAP and Hyperion Solutions’
Essbase API. A third interface – the OLAP Council’s MDAPI 2.0 specifica-
tion – is also under consideration.
The Holos Open Client Interface provides integration with ODBC-compliant
applications.
Published benchmarks
Holos does not provide any published OLAP benchmarks.
Price structure
Because of the fixed-price element of the host-based server, costs per user
start a little higher than other OLAP tools, but can be substantially lower for
large numbers of users.
The entry point for Holos is around $80,000. This enables five concurrent
server connections, ten licensed desktop users and one application developer.
The price includes training for the developer and end users.
Sterling Software –
Eureka:Suite
Summary
At a glance
Terminology of the vendor
Ovum’s verdict
Product overview
Future enhancements
Commercial background
Product evaluation
Deployment
Platforms
Data access
Standards
Published benchmarks
Price structure
Evaluation: Sterling Software – Eureka:Suite Ovum Evaluates: OLAP
At a glance
Developer
Sterling Software (Business Intelligence Division), Eden Prairie, Minnesota,
USA
Versions evaluated
Eureka:Suite, comprising Eureka:Strategy 5.7.8, Eureka:Analyst 4.5,
Eureka:Intelligence 1.1, Eureka:Reporter 6.1.3 and Eureka:Portal 2.0
Key points
• A suite of tools that integrates ROLAP, multidimensional and query and
reporting systems through an Internet-based portal
• Servers run on Unix and Windows NT; clients run on Windows 3.1, 95, 98
and NT workstation. Web access is also provided
• Eureka:Suite integrates Information Advantage’s DecisionSuite ROLAP
tools and IQ Software’s SmartServer and Vision OLAP tools – both of
which were acquired by Sterling Software in August 1999
Strengths
• A comprehensive suite of business intelligence tools and services that are
easily accessible through a web-based portal interface
• Integrates a highly scalable ROLAP system – query processing is
automatically optimised between the server and the RDBMS
• Provides flexible scheduling, report sharing and messaging facilities that
are matched by few OLAP tools
Points to watch
• The ROLAP server runs exclusively on Unix
• The ROLAP server can only be accessed by Eureka clients, which are not
the strongest tools for highly specialised analysis
• Still considerable scope for integration between the back-end systems –
management of a Eureka system can be complex
Ratings
(Ratings chart: Web support, Management, Adaptability, Performance tunability and Customisation, each scored on a 1–10 scale.)
Calculations
A general term for any numeric fact that is included in a report. There are
three types of calculations:
• volumetric – stored values from the database fact tables
• calculated facts – not stored, but derived by the ROLAP engine
Ovum’s verdict
What we think
Before being bought by Sterling Software in 1999, Information Advantage
was one of the first OLAP vendors to introduce Internet portal concepts into
the business intelligence world. Eureka:Suite is its first attempt to integrate
two radically different product lines – DecisionSuite, and IQ SmartServer and
IQ Vision – to offer a comprehensive business intelligence suite and, more
significantly, to step away from its strict ROLAP stance.
Eureka’s portal approach has the potential to bring OLAP to a broader
audience. Users will either be delighted by the ease with which they can
access business intelligence data, or overwhelmed by the choice of tools and
breadth of information available – although users can easily personalise the
interface for specific information. However, the core strength of the product
remains its ROLAP capabilities (the focus of this evaluation), in particular
its ability to analyse large volumes of information with high numbers of
attributes. Scalability is underpinned by a well-designed server-based
architecture, including an object request broker and a proven ROLAP engine
that maximises the use of RDBMS technology while addressing the
limitations of SQL. The product’s flexible report scheduling, sharing and
distribution options are matched by few other ROLAP tools.
However, the ROLAP server is ‘closed’ in that it is not accessible as a data
provider to complementary third-party business intelligence tools. Users are
limited to Sterling’s own client offerings, which, although well integrated and
easy to use, are not necessarily ‘best-of-breed’. Sterling has committed to
supporting Microsoft’s OLE DB for OLAP standard, but has not yet
announced a date.
Under the portal interface, Eureka:Suite represents a collection of separate
business intelligence systems. There is still considerable scope for further
consolidation and integration and users should not underestimate the
management burden. The implementation of the ROLAP system can be
complex and any purchase decision usually involves a wider data
warehousing consideration. Customers without a data warehousing strategy
will almost always need to buy in some consulting and migration assistance.
Large-scale rollouts can take between six and 18 months to complete.
When to use
Eureka:Suite is suitable if you:
• require access to corporate data stored in large, finely-tuned data
warehouses
• are already committed to a large-scale data warehouse strategy, or are
preparing for one
Product overview
Components
‘Eureka’ is the colourful brandname for a suite of tools that integrates
ROLAP, multidimensional OLAP, query and reporting systems – technologies
that were originally developed by Information Advantage and IQ Software.
The five main components of the Eureka:Suite are:
• Eureka:Strategy version 5.7.8 – for server-based ROLAP analysis
• Eureka:Analyst version 4.5 – for multidimensional analysis at the desktop
• Eureka:Intelligence version 1.1 – for web-based, integrated query,
reporting and analysis
• Eureka:Reporter version 6.1.3 – for server-based production reporting.
All of these components publish their results to, and can be accessed through,
the Eureka:Portal (version 2.0), a web-based portal that provides a
personalised, single entry point to a broad range of business intelligence and
corporate data.
Figure 1 shows the primary functions of the components and whether they
run on the client or the server.
Eureka:Suite is one of the largest business intelligence suites on the market.
But for the specific purposes of this evaluation, we focus on the ROLAP and
multidimensional OLAP capabilities provided by Eureka:Strategy and
Eureka:Analyst. However, we do acknowledge (where appropriate) the
functionality provided by the other components.
Eureka:Portal
Eureka:Portal is a web browser interface that allows users to access OLAP
reports and other corporate information. It applies the same principles used
by consumer Internet portals (such as MyYahoo!) to provide a personalised
single point of access to business intelligence and other corporate data via
URL links.
(Figure 1: Eureka:Analyst and the Eureka:Portal run on the client.)
Eureka:Strategy
Eureka:Strategy is a Unix-based ROLAP server that processes client
requests against large data warehouses – typically to evaluate dimension
attributes for customer segmentation and inventory management
applications. The server carries out a significant amount of data processing
(joins, aggregations and calculations).
Eureka:Strategy uses an intermediary metadata layer to dynamically
generate SQL for a query, and delivers formatted content back to the
presentation tier. The metadata layer provides a business-oriented map of
the underlying database table structures, which automatically synchronises
applications with changes in the RDBMS. This information is stored in a
series of metadata tables, usually in the data warehouse. The metadata can
also map data stored in more than one RDBMS.
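The principle of the metadata layer – business names mapped to physical tables and columns, with SQL generated dynamically at query time – can be sketched as follows. This is a toy illustration with hypothetical names, far simpler than Eureka:Strategy's real metadata tables.

```python
class MetadataLayer:
    """Business-name to physical-schema mapping used to generate SQL."""
    def __init__(self, measures, dimensions, fact_table):
        self.measures = measures        # e.g. {'Revenue': 'rev_amt'}
        self.dimensions = dimensions    # e.g. {'Region': 'region_cd'}
        self.fact_table = fact_table

    def sql_for(self, measure, dimension):
        """Generate SQL for a business question such as 'Revenue by Region'."""
        col = self.measures[measure]
        dim = self.dimensions[dimension]
        return (f'SELECT {dim}, SUM({col}) FROM {self.fact_table} '
                f'GROUP BY {dim}')
```

The synchronisation point the text makes follows directly: if a physical column is renamed in the warehouse, only the mapping changes – the business query itself is untouched.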
Eureka:Strategy includes a number of client components:
Designer
An end-user interface for defining ROLAP queries and new reports. Reports
can be enhanced by creating custom calculations directly from the Analysis
interface. A range of visualisation techniques are also provided.
Viewers
Provide a visual interface for analysing ROLAP models generated by the
ROLAP engine. Two types of ROLAP Viewers are available – for client-server
and a combination of HTML and Java:
• a client-server interface for casual users – it allows users to tailor reports
built with Analysis, or simply view predefined reports delivered by the
ROLAP server as part of a schedule or agent process
• a web interface – enables reports to be accessed and analysed from a web
browser. It is closely integrated with the ROLAP server (via CGI), with
reports dynamically generated in HTML.
Administrator
A client-server tool aimed at model designers, DBAs and systems
administrators in the ROLAP environment. Interfaces are provided for:
• creating, validating and maintaining the metadata tables
• administering and managing the ROLAP environment.
Eureka:Analyst
Eureka:Analyst is a multidimensional analysis tool targeted at analysts with
calculation-intensive analytical needs – typically financial forecasting. The
tool is based on IQ Software’s Vision software – a proprietary OLAP client
that loads multidimensional cubes from MDDB servers and holds them in
memory on the desktop for offline analysis. It links directly to Applix’s TM1
MDDB engine and OLE DB for OLAP-compliant data sources.
Eureka:Analyst can be configured for server-based and local desktop OLAP
capabilities:
• a standard mid-tier server architecture, where Applix’s TM1 Server or
Microsoft SQL Server 7.0 OLAP Services MDDB act as data providers.
Users can access cubes directly from these servers (via the TM1 API or
OLE DB for OLAP)
• a desktop-based OLAP architecture, which processes multidimensional
cubes that have been downloaded from the server. Only base-level data is
downloaded, compressed and stored in memory on the client, resulting in a
small but rigid cube structure. Because data is held locally on the client,
on-the-fly calculations and aggregations can be performed offline.
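The desktop architecture can be approximated as follows: only leaf-level cells are held in memory, and any higher-level total is computed on request. This is an illustrative sketch, not Eureka code; the data and dimension names are invented.

```python
# Sketch of a desktop OLAP cube: only base-level (leaf) cells are held in
# memory; aggregations along any dimension are computed on the fly.
base_cells = {
    # (product, region, month) -> sales
    ("Widget", "East", "Jan"): 100,
    ("Widget", "West", "Jan"): 150,
    ("Gadget", "East", "Jan"): 80,
    ("Gadget", "East", "Feb"): 90,
}

def aggregate(**fixed):
    """Sum all base cells matching the fixed dimension members;
    any dimension not fixed is rolled up (aggregated over)."""
    total = 0
    for (product, region, month), value in base_cells.items():
        cell = {"product": product, "region": region, "month": month}
        if all(cell[d] == v for d, v in fixed.items()):
            total += value
    return total

print(aggregate(region="East"))                  # roll up product and month
print(aggregate(product="Widget", month="Jan"))  # roll up region
```

The trade-off is visible in the sketch: the client cube stays small because no aggregates are stored, but every summary query must scan the base cells.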
A key feature of Eureka:Analyst is its ability to create calculations on-the-fly
and write them back to multidimensional cubes. It uses TM1’s Excel add-in
component, called Perspectives, to provide this functionality. Analyst also
includes tools to generate TM1 cubes and store them in the TM1 Server. The
tool supports write-back capabilities to TM1 cubes; but it can only access
predefined Microsoft OLAP Services cubes.
Detailed evaluations of Applix TM1 and Microsoft SQL 7.0 OLAP Services
are included as separate reports in Ovum Evaluates: OLAP.
Eureka:Intelligence
Eureka:Intelligence is a Java-based tool that provides web-based integrated
query, analysis and reporting (WIQAR) functionality. It allows users to slice-
and-dice and drill into live data via graphical, interactive views including
tabular reports and charts. The product also lets users create
multidimensional OLAP reports and provides report scheduling facilities.
Eureka:Intelligence includes a client application and a server. The WIQAR
server comprises several main components:
i-cache
i-cache stores multidimensional data for OLAP processing. The cache holds
the information in a non-sparse form and can be accessed by multiple users.
The Connection Manager facility handles user and server connectivity.
Eureka:Reporter
Eureka:Reporter is a server-based reporting tool best used for creating,
scheduling and distributing batch reports from ODBC-compliant data
sources. The Eureka:Reporter server acts as a repository for storing reports
and report definition templates.
Eureka:Reporter integrates SQL query and reporting capabilities. The query
engine performs all SQL queries, condition filters, calculations and
aggregations. It runs directly against relational databases or third-party
operational datastores. The query capability also ‘doubles’ as a cube builder,
moving relational datasets into dimensional structures for OLAP analysis.
An integrated report writer uses the output from the query engine to create
‘active’ report documents. These reports act like web pages, and incorporate
‘hot objects’ that create drillable spots or hyperlinks to additional data
sources. In an OLAP context, hot spots can be used to drill to a different
level of detail by making a new request for data.
Eureka:Reporter has three client components:
Report Viewer
An end-user interface for accessing SQL reports. It includes a DDE interface
and a command line interface for submitting queries to the Report Server
and viewing reports. Report Viewers are available for client-server, ActiveX
and HTML.
Report Designer
Provides interactive tools to create queries, design reports, charts and
crosstabs. In addition, the Report Designer also includes the functionality
contained in the Report Viewer.
There are two main interfaces for creating reports:
• QuickQuery – for creating ad hoc queries and formatting the results using
simple grouping, sorting and totalling functions
• FreeForm – for more elaborate formatting and the inclusion of ‘hot’ objects
(that link to other documents).
Report Designer accesses data via a ‘Knowledge Base Manager’ – a metadata
repository that provides a business view of the database, the objects in them
(such as tables and columns) and the relationships (joins) between these
objects. Sterling provides out-of-the-box solutions that create and populate
metadata models for SAP, PeopleSoft, Baan and JBA transaction models.
Report Administrator
A client-server management tool for managing report metadata. It includes a
metadata management facility to allow DBAs to configure, define and
maintain access to multiple data sources. Report Administrator includes all
functionality found in the Designer and Viewer interfaces.
Architectural options
Using Eureka:Suite
Reports can be defined by DBAs, but can also be created and viewed by
business end users using the Analysis client. Experienced power users can
use this interface to enhance models by including their own custom measures
and filters. The ROLAP Viewer interface provides a simple interface for
‘information consumers’ that only require easy viewing access to reports
scheduled by the Eureka:Strategy Server.
Administrators are provided with separate ROLAP Administrator interfaces
for managing end users. These provide a number of graphical tools that
enable managers to configure user security profiles and govern database
queries according to time and the size of result sets returned from the
RDBMS. The ROLAP interface also provides the management interface,
allowing distribution schedules to be built and agents to be developed that
‘push’ results directly to end users via alerts, e-mails or report attachments.
Creating a report
The Eureka:Strategy development philosophy is focused on the concept of
‘reports’ that users create by selecting dimensions from the metadata tables.
Reports are defined for or by users, sent to other users, scheduled by agents
or published through the Web. Each report is based on a report template that
defines its layout, content and properties. Report templates are created in
the Template Editor. This is shown in Figure 4.
ROLAP designers can readily create templates, but ROLAP users can modify
them by changing the layout of dimensions or including different dimension
members. Typically, templates will be created for standard reporting
requirements in an organisation, such as a market share summary or
product ranking.
Users can define new TM1 cubes from datasets returned from
Eureka:Reporter. A TM1 Cube Creation Wizard, shown in Figure 5, is
provided to select measures and dimensions, create filters and so on. This facility
is intended to support the ad hoc creation of small and simple cubes for local
desktop analysis. Cube creation is therefore provided as an adjunct to the
core ROLAP and reporting environments, rather than to support complex
multidimensional modelling requirements.
As users create the TM1 cube, they simultaneously create a ‘design
document’ that is referenced every time the cube needs to be updated with
fresh data. TM1’s Perspectives interface is an Excel add-in tool that is used to
access and analyse the cube in a spreadsheet-like matrix. Standard OLAP
functions such as slice-and-dice, drill-down and traffic lighting are provided.
Distributing reports
Support for the sharing of information between large numbers of users is an
important element of the Eureka:Suite architecture. A number of features
within the product promote the easy sharing of reports.
For example, the user interface for all the Eureka:Strategy client tools is
based on the notebook metaphor of a ‘portfolio’. A portfolio is made up of a
number of tabbed pages.
The first page is always the ‘alert page’ and lists any alerts received, with a
short description of each one. An alert might notify a user that a scheduled
report has been completed, or it might have an attached report sent by
another user. Other pages in a user’s portfolio are used to organise reports in
an efficient manner; they can be set up according to each user’s preferred
way of working. A portfolio can include folders shared by a workgroup.
In addition, a message icon is provided on a standard toolbar across all the
client tool interfaces to allow end users and administrators to easily define
alert messages, attaching, if required, one or more reports. When the report
has run, an alert message appears in the portfolio of all the recipient users.
This is shown in Figure 6.
As well as sending reports to other users, developers and users can create
Agents to run reports. An Agent runs one or more reports and can deliver
alerts to one or more users when the report is completed. Agents can be
scheduled to run at set times or can be fired off by a specific trigger event,
such as the loading of the data warehouse.
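A minimal sketch of the agent mechanism described above follows; the class, names and trigger event are invented for illustration, assuming an in-process event dispatcher rather than Eureka's actual implementation.

```python
# Sketch of an Agent: runs one or more reports when its trigger event
# occurs (for example, a warehouse load) and delivers alerts to its
# recipients. All names are hypothetical.
class Agent:
    def __init__(self, reports, recipients, trigger_event):
        self.reports = reports
        self.recipients = recipients
        self.trigger_event = trigger_event
        self.alerts_sent = []

    def on_event(self, event):
        """Fire the agent only when its configured trigger event occurs."""
        if event != self.trigger_event:
            return
        for report in self.reports:
            result = f"report '{report}' completed"  # stand-in for a real run
            for user in self.recipients:
                self.alerts_sent.append((user, result))

agent = Agent(["Market Share Summary"], ["alice", "bob"],
              trigger_event="warehouse_loaded")
agent.on_event("warehouse_loaded")
print(agent.alerts_sent)
```

In this sketch a time-based schedule would simply be another source of events; the agent itself does not care whether its trigger came from a clock or from the warehouse load completing.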
Future enhancements
Portal enhancements
Eureka:Portal’s search engine will be enhanced and integration with third-
party search technology will be provided.
A staging area will be added to the Content Server to support a ‘submission-
and-review’ process for publishing content. This will include the provision of
content expiration and version control facilities.
Various functional enhancements to the web interfaces are planned using
Java applets and ActiveX controls.
ROLAP enhancements
Sterling plans a number of enhancements to the ROLAP system including
the provision of new web-based ROLAP Viewers and Designers that provide
full OLAP analysis and design functionality via the Web.
Eureka:Strategy will also be enhanced to support larger SQL statements and
advanced statistical and visualisation functionality by leveraging RDBMS
extensions such as RISQL and via integration with SPSS.
A Windows NT version of Eureka:Strategy and support for OLE DB for
OLAP as a data provider are planned for the long term – although Sterling
has not announced any concrete dates for availability as yet.
Scalability and performance
A number of enhancements are planned in this area, including:
• intelligent caching of large query results and metadata – the caching will
also implement security so the results can be shared across workgroups
• SQL optimisation – future releases of Eureka:Strategy will support the
‘packaging’ of SQL statements as objects for easy modification and re-use.
This will allow a DBA to build a custom ‘SQL Adapter’ that generates
optimised SQL for a specific ‘class’ of query
• load balancing and failover support – based on industry standard
distributed network processing technologies.
Development environment
Sterling aims to provide an application development environment. It is in the
process of re-structuring the API of its business intelligence tools to support
CORBA. This is expected to facilitate integration and the development of
custom applications by both customers and VARs.
Analytic applications
Sterling is developing a number of relationships with CRM and ERP vendors
to develop and market analytic applications for the customer relationship
management (CRM), enterprise performance management and healthcare
markets. Several applications have already been announced, with more
expected in 2000.
Commercial background
Company background
Customer support
Support
Multilingual, telephone hotline support is available through support centres
based in Atlanta (USA), London (England) and Sydney (Australia). On-site
support arrangements are also available. Some support is available via the
Web.
Training
A number of public and on-site training courses are provided for all Eureka
components. These include a one-day introductory course for casual end
users, a two-day course for analyst-type users and a four-day technical course
for IS developers and DBAs. Computer-based training is also available.
Sterling also offers an Online University.
Consultancy services
ROLAP implementations usually involve a wider data warehousing
consideration, and will require significant consulting. Sterling’s Professional
Services division provides a broad range of services for business intelligence.
Two service organisations are available as part of the company’s Business
Assessment and Solution Planning Service:
• business consulting – for strategic planning & business intelligence
opportunity and solution assessment and project & change management
• implementation consulting – consultants have expertise in data
warehousing, information portal development, implementing enterprise
reporting and analysis systems for specific vertical sectors.
Distribution
North America
Sterling Software
Business Intelligence Division
7905 Golden Triangle Drive, Suite 109
Eden Prairie
MN 55344
USA
Europe
Sterling Software
International Business Intelligence Division
Sterling Court
Eastworth Road
Chertsey
Surrey, KT16 8DF
UK
Asia-Pacific
Sterling Software
Business Intelligence Division
Level 21, 201 Miller Street
North Sydney
NSW 2096
Australia
http://www.sterling.com/eureka/
E-mail: bid-marketing@sterling.com
Product evaluation
End-user functionality
Summary
Eureka:Suite is not a product that can be used out-of-the-box. But it is not too
difficult to master and is flexible enough to cater for a range of users. The
portal interface provides a consistent and manageable interface to business
intelligence data, allowing users to easily navigate to higher levels of
functionality as required. Excellent support is provided for dissemination of
reports and collaborative group working.
The ROLAP clients support an extremely intuitive notebook-style interface for
advanced analysis and reporting functions. ROLAP users can also benefit
from the portal’s intuitive publish-and-subscribe model and its strong
personalisation capabilities. However, one feature we would like to see is an
approval process that requires a manager’s ‘sign-off’ before any reports are
published into the system.
(skipping over one or multiple levels) – and each drill results in data
retrieval from the database. Drilling can occur on any dimensional element,
regardless of its positioning on the report (including multiple levels of
nesting within rows, columns or sections) and derived facts; however, users
can drill only on items for which fact values are extracted directly from the
database or defined within the metadata.
Eureka:Analyst provides a similar range of OLAP functions, but through a
spreadsheet-like interface.
Changing the position of members in a dimension level
Eureka:Strategy users can change the location of dimension members
(including rows, columns or blocks of data) in a report using drag-and-drop.
Eureka:Analyst uses the TM1 dimension and measure hierarchy functions.
It is possible to add calculated members on-the-fly, but users cannot change
hierarchies or reposition members.
Visualising the drill-down hierarchies
ROLAP users are provided with a pop-up ‘map’ to show the levels of
hierarchies available for a dimension and identify the current level with a
check mark. Users can also jump to specific levels in the dimensional
hierarchy.
Drilling-down to detailed data
Users can drill-down to access detailed transactional data directly from the
report interface. Eureka:Strategy does not differentiate between aggregated
and detailed data; the same user interfaces are used and the same processing
is performed. The data does not have to undergo special preparation to be
accessible at detail level.
Eureka:Analyst also supports ‘drill-through’ capabilities to the source
relational database.
Range of front-end user tools
Eureka:Portal provides access to a number of front-end tools for client-server
and web-based reporting and analysis. These include all the Eureka Viewers,
including a custom-built OLAP client for analysing TM1 and Microsoft OLAP
Services multidimensional cubes.
Visualising the results
Report content can be visualised in multiple variations; a default mode is
provided to automate visualisation upon initial access of the report. Users
can easily select and chart data from within reports. The charting tools
support a range of business graphs, and a wizard facility is provided. Users
can simultaneously display multiple tables and charts in a report. But it is
not possible to drill-down or rotate dimensions from within a chart.
Integration with MapInfo is provided for visualising data in maps.
Eureka:Reporter also has the ability to define ‘hot objects’, which simulate
web hotlinks and look and act like web pages. Users can link from one
document to another, reach a deeper level of detail by drilling through the
underlying data or page through a briefing book of related documents.
Reports can easily be defined from scratch or using templates. It is possible to
embed images, video, sound or OLE objects in reports.
Publishing a report
The Report Caster component automatically publishes and distributes
reports to end users on an individual or workgroup basis; public and
dynamically defined distribution lists are supported. Narrowcasting
functions, which limit publication to specific users based on their personal or
workgroup exceptions, are also supported.
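Narrowcasting as described above amounts to evaluating each user's exception condition against the report's data before delivery. The sketch below is illustrative only; the users, data and thresholds are invented.

```python
# Sketch of narrowcasting: a report is delivered only to users whose
# personal exception condition is met by the report's data. All names
# and thresholds are invented.
report_data = {"region_sales": {"East": 42_000, "West": 95_000}}

subscribers = {
    "alice": lambda d: d["region_sales"]["East"] < 50_000,  # alert on weak East
    "bob":   lambda d: d["region_sales"]["West"] < 50_000,  # alert on weak West
}

recipients = [user for user, exception in subscribers.items()
              if exception(report_data)]
print(recipients)  # only alice's exception fires
```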
Reports can be published directly to the Eureka:Portal for access and
distribution to other users. Users can choose whether the content can be
viewed by every user, specific groups or just by the publisher. Analysts can
create a set of saved analysis views (briefing books) to be shared with other
users.
Targeted distribution via e-mail
Reports can be distributed via e-mail from the client tool interface.
Eureka:Portal’s Messenger feature can also be used to schedule and
automatically deliver reports stored in the repository via e-mail – or any e-
mail addressable device or channel (including digital phones, pagers and fax
machines).
Eureka:Strategy uses the Unix mail system to distribute reports – address
lists set up within Unix mail may be used, but these cannot be generated
dynamically. Users can send and receive compressed cubes on their desktop
machines via e-mail.
Eureka:Analyst users can receive cubes via e-mail, and download them onto
the desktop in memory for disconnected analysis.
Subscribing to reports
Users can easily subscribe to ‘channels’ to control which reports they receive.
Newspage is a particularly useful tool to specify the scheduled delivery of
reports and other information pertinent to a user’s daily tasks.
Summary
A Eureka report is just one perspective on the business model. Much of the
work is done beforehand when defining a business-oriented map of the
underlying database table structure (metadata tables). This allows developers
to build a logical business model to simplify end-user construction of reports.
The business model is flexible, and the use of filters and calculations allows for
considerable adaptability. A wizard-driven interface guides designers through
the process of describing complex drilling hierarchies and aggregation table
information. However, a diagrammatic editor would ease the task of setting
up and managing the metadata tables.
Eureka:Analyst also provides modelling capabilities to build TM1 cubes ‘on-
the-fly’ for desktop analysis. The models created are considerably smaller
and less flexible than the ROLAP cubes, and are not the primary focus of this
evaluation section.
Basic design
Design interface
Eureka:Strategy’s ROLAP Administrator provides a graphical interface for
mapping the data warehouse structure onto metadata. This interface
displays the metadata in spreadsheet-style tables. The ROLAP
Administrator is adequate for this task, but it would be better if there was an
overview of the main elements, rather than just a series of tables. It would
also help if it included dialogues and pick lists to help with the maintenance
of the metadata. The wizard provides dialogues and pick lists during the
metadata creation.
Reports (sets of dimensions, calculations and filters) represent the business
model. The design interfaces for both metadata and reports share the same
general style of interface.
Visualising the data source
ROLAP designers can see a sample of data from a selected relational table.
However, they cannot view the overall database schema.
Universally available mapping layer
Metadata tables can be defined to map dimensions, measures and
hierarchies to specific parts of the data warehouse. Categories provide end
users with a restricted view of the metadata tables.
Prompts for metadata
ROLAP designers are not automatically prompted to add additional
metadata when creating the metadata tables or defining reports.
Multiple designers
The tools do not provide any special support for multiple designers.
Support for versioning
There is no direct support for versioning control.
Summary
User-definable extensions
The tool provides the ability to define advanced, custom calculations on-the-
fly with regard to any dimension or attribute. These calculations can be
saved back to the MDDB, so that other users can use them.
A scripting language can also be used to create ‘add-ins’ that integrate with
third-party products (such as SPSS) to access advanced analytical functions.
However, this requires programming.
Data mining
Eureka:Suite does not provide any support for data mining.
Web support
Summary
All the Eureka tools are becoming increasingly web-enabled. But rather than
simply transferring client-server functionality onto a web browser, the product
is based on Internet portal standards to provide access to business
intelligence data. While this approach has its merits, it places restrictions on
the level of ROLAP functionality that can be effectively deployed through the
portal.
The two ROLAP Viewers provide strong web access to view predefined reports
or analyse models respectively. However, web users cannot define new reports
or add new filters or calculations to the report definitions. Reports can be
easily published and distributed to a wide range of users over the Web using
Internet technology, including hyperlinking and e-mail. Additionally,
Eureka:Intelligence provides web users with flexible and sophisticated
query, reporting and analysis functions. There is no web support for
Eureka:Analyst or the ROLAP Designer tools – though this is planned for the
future.
Management
Summary
objects and report updates, and can be based on time, date or event triggers. As
expected from a ROLAP tool, there is strong support for query monitoring and
governance, and detailed usage statistics are produced.
Management of models
Separate management interface
Several interfaces are provided to manage the back-end servers. For ROLAP
administration, Eureka:Strategy provides two graphical interfaces that are
similar in design: one is used for maintaining the metadata tables; the other
is for administering application components, report objects and end users.
Eureka:Analyst relies on the management capabilities that are provided as
part of the Applix TM1 product.
Security of models
The security of ROLAP models is governed from a multi-level security model
based on Unix, metadata and the RDBMS security systems. All models have
associated properties that govern read/modify access.
There are three levels of security provided by Eureka:Analyst:
• data in the cube (database security)
• access to the cube (Eureka:Portal security)
Management of data
How persistent data is stored (not scored)
Eureka:Strategy processes data directly from the RDBMS and creates
multidimensional models at runtime, which are cached on the server.
However, once a report has been defined, the data can be stored persistently
on the ROLAP server or any other application server, and can be periodically
refreshed for current data.
Eureka:Analyst stores multidimensional cubes locally on the desktop, or on
the Applix TM1 server.
Scheduling of loads/updates
The loading of data into the data warehouse is outside the scope of the
Eureka:Suite. Once it has been loaded and stored as part of a report
definition, a scheduler facility can be used to automate the refresh of reports.
Scheduling can be based on times, dates or events. Users can apply a refresh
schedule to a group of reports. Desktop cubes can also be refreshed by re-
querying the data source.
Management of users
Multiple users of models with write facilities
Eureka:Suite is designed to permit simultaneous read-only access.
User security profiles
Role-based security can be applied across the entire system including all
objects in the repository, business rules in metadata and data.
Management of metadata
Controlling visibility of the ‘road map’
The category definition controls the metadata that a user can access. This
definition determines the model metadata, calculations and filters that can
be included in a report for a particular user or group of users.
Adaptability
Summary
Metadata
Synchronising model and model metadata
Eureka:Strategy provides a ‘validation’ feature to ensure that metadata
categories are synchronised with the metadata tables each time a report is
created.
Impact analysis
The tools do not provide support for impact analysis.
Metadata audit trail (technical and end users)
There are no metadata audit trail facilities.
Access to upstream metadata
Eureka:Strategy integrates with Informatica’s Metadata Exchange
architecture – enabling developers to view extraction and transformation
metadata about the columns in the data warehouse that provide the data for
the model. Sterling has also joined the MetaConnect Co-operative of Ardent
Software (now taken over by Informix) for metadata integration.
Performance tunability
Summary
ROLAP
Multipass SQL
Eureka:Strategy automatically generates multipass SQL statements.
Options for SQL processing
An important feature of Eureka:Strategy is its ability to intelligently balance
SQL processing between the ROLAP server and the database. Processing
options include:
• join processing, which eliminates full outer joins of large tables
• aggregation processing, to support advanced totalling across multiple
dimensions
• calculation processing, which eliminates the use of temporary database
tables and provides support for non-SQL calculations.
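To illustrate why multiple SQL passes (with engine-side calculation) are useful, consider a 'share of total' measure: one pass fetches per-member totals, a second fetches the grand total, and the division happens in the ROLAP engine rather than via a temporary database table. The sketch below is illustrative only; the schema and data are invented, and SQLite stands in for the warehouse RDBMS.

```python
import sqlite3

# Sketch of multipass SQL: a 'share of total' calculation issued as two
# SQL passes, with the final division done in the engine instead of in
# temporary tables in the database. Schema and data are invented.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE fact_sales (region TEXT, amount REAL)")
con.executemany("INSERT INTO fact_sales VALUES (?, ?)",
                [("East", 300.0), ("West", 100.0), ("East", 100.0)])

# Pass 1: per-region totals
per_region = dict(con.execute(
    "SELECT region, SUM(amount) FROM fact_sales GROUP BY region"))

# Pass 2: grand total
(grand_total,) = con.execute("SELECT SUM(amount) FROM fact_sales").fetchone()

# Calculation processing in the engine: no temporary table required
share = {region: total / grand_total for region, total in per_region.items()}
print(share)  # East 0.8, West 0.2
```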
Speeding up end-user data access
The ROLAP server’s data cache is volatile, and can be accessed to minimise
data access times.
Aggregate navigator
Eureka:Strategy automatically accesses the highest level aggregate tables in
the database to fulfil a ROLAP request and minimise response time. It
calculates the Cartesian cross-product of dimensional data models, which
then produces aggregate-level priority information.
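The navigation step can be sketched as choosing, among the available aggregate tables, the smallest one whose grain still covers every level a query asks for. This is an illustrative sketch under invented names, not Eureka's actual algorithm.

```python
# Sketch of an aggregate navigator: given a query's required dimension
# levels, pick the smallest pre-built aggregate table whose grain covers
# them; otherwise fall back to the base fact table. Names are invented.
AGG_TABLES = [
    # (table name, dimension levels it retains, approximate row count)
    ("agg_year_region",  {"year", "region"},                            1_000),
    ("agg_month_region", {"year", "month", "region"},                  12_000),
    ("fact_sales",       {"year", "month", "region", "store", "product"},
                                                                    5_000_000),
]

def navigate(required_levels):
    """Return the cheapest table that retains every required level."""
    candidates = [(rows, name) for name, levels, rows in AGG_TABLES
                  if required_levels <= levels]
    return min(candidates)[1]

print(navigate({"year", "region"}))     # smallest covering aggregate
print(navigate({"region", "product"}))  # forces the base fact table
```

Answering the query from the 1,000-row aggregate instead of the 5,000,000-row fact table is where the response-time saving comes from.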
MOLAP
Trading off load time/size and performance
Eureka:Analyst is a multidimensional OLAP system based on TM1. It
compresses and loads only the base data into memory on the client machine.
All aggregations and summaries are performed on-the-fly as requested. Data
and subsequent calculations are stored in a very efficient manner for
enhanced performance.
Processing
Use of native SQL to speed up data extraction
Eureka:Strategy uses native SQL interfaces to connect to all the major
RDBMSs. It also uses ODBC for Unix to connect to Red Brick, Teradata and
HP-Intelligent Warehouse data warehouses.
Distribution of processing
A client request is automatically routed to the least utilised ROLAP server
for processing. There is no automatic load balancing between these servers,
because each functions independently. It is, however, possible to balance
processing loads between the database and ROLAP servers.
SMP support
Eureka:Strategy takes full advantage of SMP technology.
Customisation
Summary
Customisation
Option of using a restricted interface
Various aspects of the client tools’ interface can be modified to provide
restricted or extended views and functionality.
Ease of producing EIS-style reports
Eureka:Strategy’s Administrator interface provides an add-in facility to
define pre- and post-process operations in reports. Typically, these are calls
to an external procedure, such as a Windows application or a Unix shell
script, that are used to customise the execution or results of a report, or add
new capabilities such as EIS displays.
Applications
Simple web applications
A web gateway API is provided for the development of simple EIS interfaces
in HTML or JavaScript.
Additionally, Eureka:Portal exposes an XML-based API that can be used by
ISVs or VARs for custom development.
Development environment
There is no visual development environment. However, Eureka:Strategy
provides a scripting language for defining procedures for interaction with
external systems or data. The scripting language – a cross between Visual
Basic and Unix shell scripts – uses the standard Unix ‘vi’ editor.
Use of third-party development tools
Eureka:Strategy’s client DLLs can be called by development tools such as
Visual Basic, PowerBuilder and Visual C++.
Deployment
Platforms
Client
Eureka:Suite’s client components run on Windows 95, Windows 98 and
Windows NT. Web access is supported via Netscape, Microsoft and Mosaic
web browsers. The Report Viewers also provide support for Unix.
Server
Eureka:Strategy runs exclusively on Unix: HP-UX, AIX, NCR MP-RAS, SGI
Irix, Sequent Dynix, Sun Solaris, DG-UX, DEC Unix, Siemens Reliant and
Unisys SVR4.
Eureka:Analyst supports Windows NT and Unix.
Eureka:Portal runs on Windows NT and Unix (Sun Solaris, HP-UX and AIX).
Data access
Eureka:Strategy provides native access to Unix-based RDBMSs only.
Databases supported include Oracle, DB2, Sybase, Informix, Tandem and
MDI. ODBC for Unix database drivers are also supported to provide access to
Teradata, HP-Intelligent Warehouse, Red Brick and other non-Unix
sources.
Eureka:Analyst can access multidimensional data held in Applix TM1 and
Microsoft SQL Server 7.0 OLAP Services MDDB servers.
Eureka:Reporter runs against all the major RDBMSs. It also includes access
to transactional/ERP databases and SPSS databases.
Eureka:Portal can connect to any JDBC-accessible database. Connection is
required only for the portal repository and any data source can be linked (via
URLs) as ‘content’ to the portal.
Standards
Eureka:Suite has its own proprietary client and server APIs. The Viewers
support standard HTML, Java and JavaScript.
Published benchmarks
Sterling has not published any OLAP benchmarks for its Eureka products.
Price structure
Pricing for Eureka:Suite varies, depending on the type and number of client
and server components licensed, and the level of functionality that is
required. This ranges from:
• $100 per user, for a simple portal implementation with basic query and
report viewing capabilities
• up to $1,000 per user, for advanced OLAP design, analysis and reporting.
The pricing structure for the ROLAP system is aimed primarily at large-
scale enterprise deployments. Entry-level pricing usually starts at $75,000.
Summary
At a glance
Terminology of the vendor
Ovum’s verdict
Product overview
Future enhancements
Commercial background
Product evaluation
Deployment
Platforms
Data access
Standards
Published benchmarks
Price structure
Evaluation: WhiteLight Systems – WhiteLight Analytic Application Server Ovum Evaluates: OLAP
At a glance
Developer
WhiteLight Systems, CA, USA
Versions evaluated
WhiteLight Analytic Application Server, version 2.0
Key facts
• A ROLAP-oriented tool that provides a multidimensional cache for OLAP
calculations and includes a component-based development environment
• Server runs on Windows NT and Solaris; clients run on Windows 95,
Windows 98 and Windows NT
• The WhiteLight OLAP products are designed to support ‘Integrated
Decision Processing’ – where modelling, analysis and integration
functions are hosted on an analytic application server
Strengths
• Advanced predictive modelling techniques for financially-oriented ‘what-
if?’ analyses
• Sophisticated metadata exploration and model auditing tools
Points to watch
• Requires a clean data source – WhiteLight does not provide any scheduled
ETL functionality of its own
Ratings
[1–10 ratings chart: Web support, Management, Adaptability, Performance tunability, Customisation]
Terminology of the vendor
Data map
A set of properties for elements in a measure dimension that tells
WhiteLight where values in a database are found. For example, it specifies
the database, table and columns to retrieve values from.
Elements
Elements are subordinate members in a dimension hierarchy. There are two
types of elements: qualifiers, which represent objects such as customers, and
measures, which represent quantitative values such as sales.
Infospace
A set of dimensions and elements that represent a set of useful cells in the
model for a particular query. Typically, users create an infospace in a
worksheet to provide an initial view onto meaningful data.
Model
A multidimensional business model that users create to represent corporate
data for a specific aspect of their business. Physically, a model is a file stored
in the repository and has one or more worksheets associated with it.
Rules
Used to derive the values for cells in a model. Rules are attached to elements
in the model and can be a formula, a data map or a UEV.
Schema
The WhiteLight schema specifies the databases, tables, columns and joins to
be used by models.
UEV
User-entered value. A rule created by a user in a worksheet cell that is not
generated from the source database or calculated as the result of a formula.
UEVs are commonly used to change the values in a model for ‘what-if?’
analysis.
Worksheet
A spreadsheet-like view of model data presented in tabular form. Worksheets
are used for interactive OLAP analysis and reporting. Multiple worksheets
can be associated with a model to provide different views of data held in the
model.
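The terminology above fits together roughly as follows. The sketch below is an illustration only – the function and structures are hypothetical, not WhiteLight’s API – showing how a cell’s value might be resolved by its rule: a user-entered value (UEV), a formula over other elements, or a data map into the database.

```python
# Hypothetical sketch of the terminology above: a model whose cells are
# resolved by rules, where a rule is a UEV, a formula or a data map.
# Names and structures are illustrative, not WhiteLight's API.

def resolve(cell, uevs, formulas, data_map, database):
    """Return a cell's value. In this sketch, UEVs (what-if overrides)
    take precedence over formulas, which take precedence over values
    mapped in from the source database."""
    if cell in uevs:                      # user-entered value ('what-if?')
        return uevs[cell]
    if cell in formulas:                  # formula rule over other cells
        rule = formulas[cell]
        return rule(lambda c: resolve(c, uevs, formulas, data_map, database))
    table, column = data_map[cell]        # data map: where the value lives
    return database[table][column]

# A small example: 'margin' is derived from 'sales' and 'costs'.
database = {"facts": {"sales": 1000.0, "costs": 600.0}}
data_map = {"sales": ("facts", "sales"), "costs": ("facts", "costs")}
formulas = {"margin": lambda get: get("sales") - get("costs")}

print(resolve("margin", {}, formulas, data_map, database))                # 400.0
print(resolve("margin", {"costs": 700.0}, formulas, data_map, database))  # 300.0
```

The second call shows the role of a UEV: overriding ‘costs’ in a worksheet cell changes the derived margin without touching the source database.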
Ovum’s verdict
What we think
WhiteLight is best described as a ROLAP-oriented MDDB tool that provides
a strong front-end to large data warehouses. The tool scales like a ROLAP
tool, but also benefits from a shared multidimensional cache for enhanced
performance. This hybrid architecture easily lends itself to large multi-user
deployment. The provision of a component-based development environment
and OLE DB for OLAP support also opens up the product to third-party
development and integration. What sets WhiteLight apart from a
straightforward OLAP server is the provision of a middle analytic application
layer that hosts modelling, analysis and data integration features based on a
set of CORBA services.
WhiteLight currently has a lead over competitors with respect to its
modelling capabilities. The tools support the creation of flexible business
models that contain highly granular business rules and calculations that can
behave differently, depending on the context in which they are used. A unique
feature is its metadata exploration capability, which allows users to gain a
better understanding of a model’s underlying business logic. Another key
strength is its predictive modelling capabilities – models are highly
adaptable to change and can be tested under different scenarios. The product
is therefore well suited to dynamic environments in which volatility is the
order of the day, the underlying customer base can change quickly, and
analysts are comfortable with this complexity and uncertainty.
WhiteLight is targeted at ‘complex ROLAP’ applications and is particularly
well equipped to deal with the requirements of sophisticated financial
analysis, particularly credit risk management and profitability analysis.
Clearly, the product is most at home in the financial sector, and is most likely
to expand its sales into that particular territory. But WhiteLight’s
consultancy partners have the expertise required to develop modelling
components relevant to some strictly non-financial problem domains, such as
database marketing, brand management and customer churn analysis.
WhiteLight does not offer an out-of-the box web client. Instead, it provides re-
usable components to develop custom web-based analytic interfaces.
Although this shortens application deployment cycles, web users are
restricted by the level of functionality built into the components – the range
is limited, but expanding.
WhiteLight does not provide a scheduled ETL capability. This means that the
WhiteLight server is either pulling data off single operational sources at
runtime (with associated overheads and lack of integration) or accessing a
pre-integrated data warehouse. If WhiteLight is to be a long-term player in
the OLAP market, it will need an ETL integration strategy.
We also have reservations about the market opportunities for WhiteLight’s
products. The WhiteLight tools are mainly targeted at specialised analysts
within large corporations – a relatively small group. Although the analytic
applications market is set to grow significantly, the number of people
requiring access to more complex analytics will not grow proportionately.
When to use
WhiteLight is suitable if you:
• want to support financially-oriented applications – particularly credit risk
management and profitability analysis
• want to build large, complex business models, and be able to understand
and adapt them easily
Product overview
Components
WhiteLight Analytic Application Server 2.0 consists of the following
components:
• WhiteLight Analytic Application Server
• WhiteLight Workbench
• WhiteLight Excel Add-In
• Application component environment (ACE).
Figure 1 shows the primary functions of the components and how they relate
to client-server systems.
Application server
The WhiteLight Analytic Application Server provides structured
application-level functionality on top of a basic set of CORBA-based
services, much like a general-purpose application server.
However, in the case of WhiteLight, it also manages the execution of complex
structured business rules alongside the manipulation of different objects
(numbers, text and images) within a multidimensional framework. This
functionality forms the basis of WhiteLight’s ‘integrated decision processing’
approach (see Using WhiteLight section).
WhiteLight Workbench
WhiteLight Workbench is the client-side tool for:
• business modelling and OLAP analysis
• administering the WhiteLight Analytic Application Server.
It consists of a number of tools for end users and DBAs.
End-user tools
For WhiteLight end users (analysts), the following tools are provided:
• Modeller’s Workbench – a graphical tool for building analytical models.
The modelling functionality includes ActiveRules, which enables analysts
to model complex business problems using simple drag-and-drop methods.
• Cell Explorer – a metadata exploration tool for viewing an individual cell’s
value, address, parents and children
• Model Audit – an interface for creating a textual description of the
elements and logic of a business model
• Worksheet Tool – a graphical interface for creating and analysing
worksheets for a multidimensional model. Worksheets provide a
spreadsheet-like view of the model and allow users to navigate through
models and perform OLAP functions
• Worksheet Filter – provides filtering, sorting and ranking commands to
narrow the data displayed in a worksheet
• Charting Tool – enables analysts to visualise multidimensional
information in a number of graphical formats
• Query Builder – enables advanced users to create query scripts that are
submitted to the server. Queries are composed in HQL (Hypercube Query
Language), a proprietary SQL-like multidimensional query language.
Administration tools
For WhiteLight administrators, the following tools are provided:
• Server Console – an interface for monitoring client connections and
setting cache parameters on the WhiteLight Analytic Application Server
Architectural options
Division of roles
WhiteLight has a number of components that are employed by four different
types of user:
• administrators – install the WhiteLight Server and WhiteLight
Workbench clients. They use the WhiteLight Workbench client for server-
related tasks, including server configuration, cache management and
creating and maintaining WhiteLight schema for bringing in source data.
They are also responsible for setting user, group and model security
MultiCache
The most unusual feature of WhiteLight is how it exploits a
multidimensional cache (called MultiCache) to perform complex OLAP
calculations.
MultiCache is RAM-based and is designed to provide interactive response to
queries. In a typical usage scenario, detail data is managed by the RDBMS
and retrieved from it quickly; high-level aggregates are stored on disk and
managed by MultiCache, giving very quick response times; and mid-level
aggregates are calculated on-the-fly. The cache is ‘self-tuning’ – as more
mid-level aggregates are requested and then cached, MultiCache gets faster,
retaining the most-requested information while removing less
frequently-requested data.
MultiCache operates in the same way as an MDDB, and can either be pre-
loaded for predictable performance or populated dynamically as data is
requested. Multiple users can share a single instance of a model’s cached
values in memory. However, scalability will be limited by the amount of RAM
available.
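The self-tuning behaviour can be illustrated with a minimal sketch (an illustration, not WhiteLight’s implementation): aggregates are computed and cached on demand, request counts are tracked per entry, and the least-requested entry is evicted when the cache is full.

```python
# An illustrative sketch (not WhiteLight's implementation) of a
# 'self-tuning' aggregate cache: frequently-requested aggregates stay
# cached, while the least-requested entry is evicted when the cache fills.

class AggregateCache:
    def __init__(self, capacity, compute):
        self.capacity = capacity
        self.compute = compute      # expensive on-the-fly calculation
        self.values = {}            # cached aggregate values
        self.hits = {}              # per-entry request counts

    def get(self, key):
        if key not in self.values:
            if len(self.values) >= self.capacity:
                # evict the least frequently-requested entry
                victim = min(self.values, key=lambda k: self.hits[k])
                del self.values[victim], self.hits[victim]
            self.values[key] = self.compute(key)   # calculate once
            self.hits[key] = 0
        self.hits[key] += 1                        # cache hit from now on
        return self.values[key]

cache = AggregateCache(capacity=2, compute=lambda level: f"aggregate@{level}")
cache.get("region/month")   # computed, then cached
cache.get("region/month")   # served from cache
cache.get("store/day")
cache.get("country/year")   # cache full: least-requested entry is evicted
```

After the last call, the twice-requested ‘region/month’ aggregate survives while ‘store/day’ has been evicted – the cache has tuned itself to the query workload.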
Metadata exploration
A valuable feature of WhiteLight is its support for guided metadata
exploration and model validation. It provides end users with a number of
graphical tools for verifying calculation accuracy inside models.
Cell Explorer
WhiteLight’s Cell Explorer tool allows end users to graphically navigate the
metadata behind complex, highly-derived calculations in the model, by
examining the rules, relationships and calculations applied to data from its
point of origin onwards. It is designed for analysts who need to understand
the underlying business rules and logic used to calculate data, and the
sources of that data.
As Figure 3 shows, Cell Explorer shows the current cell address in the centre
of the window, the children of the cell on the left and the parents of the cell on
the right. Selecting a different dimension in the cell will change the children
and parents to those in the selected dimension. Selecting a child or parent
will navigate through the model, making that member part of the centre cell.
This enables users to rapidly explore the model. The lower part of the
window contains rules that are used to calculate the cell, as well as a textual
description of the currently-selected element.
Formula bar
The worksheet formula bar quickly shows how cells are calculated. End users
can see exactly how data values in a business model are derived on a cell-by-
cell basis.
Model Audit
A built-in feature that documents existing models and is used to track
changes to models. Users can create and view a complete textual description
of any model using the Model Audit interface. The Model Audit window, as
shown in Figure 4, displays dimensions, hierarchies, rules, infospaces and
database structures that comprise a model. It also provides details on model
ownership and currency information (such as when it was created or last
modified).
WhiteLight provides facilities for the verification of calculation accuracy, by
automatically performing consistency checks for cell errors and potential rule
conflicts.
ActiveRules
WhiteLight’s patented ActiveRules technology allows it to build context-
sensitive models, which include complex business calculations that behave
differently depending on the context of their use.
ActiveRules consists of two components – ‘rules’ and the ‘rules compiler’.
WhiteLight models consist of business ‘rules’, which determine how
information is derived for cells in the model. These rules are automatically
processed by a dynamic compiler facility to create a runtime model.
WhiteLight rules are ‘active’, because they are instantiated objects that know
about one another in a semantic network. For example, the way ‘gross
margin’ is calculated is different from how the percentage variance is
calculated, so these elements have different rules. However, ‘gross margin’
could be calculated differently for products and customers depending on the
context of the analysis. The intelligence needed to determine how to calculate
rules for any given cell in the model is automatically managed by the ‘rules
compiler’, which performs a semantic analysis of the rules to automatically
determine rule applicability and ordering.
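What the ‘rules compiler’ must accomplish can be sketched in miniature (an illustration only – the element names are invented and this is not WhiteLight code): given each rule’s dependencies on other elements, derive an evaluation order in which every rule runs after its inputs.

```python
# A minimal, hypothetical sketch of one job of a 'rules compiler': given
# rules whose formulas reference other elements, derive an evaluation order
# in which every rule runs after the rules it depends on (topological sort).
from graphlib import TopologicalSorter   # Python 3.9+ standard library

rules = {
    "gross margin":  ["revenue", "cost of sales"],   # inputs to this rule
    "margin %":      ["gross margin", "revenue"],
    "revenue":       [],                             # base elements have
    "cost of sales": [],                             # no rule inputs
}

order = list(TopologicalSorter(rules).static_order())
print(order)  # base elements first, then 'gross margin', then 'margin %'
```

Context sensitivity adds a second dimension to this (the same element may carry different rules for products and for customers), but the ordering problem above is the core of automatic rule management.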
Application component environment (ACE)
Application builders are provided with a core set of Java and ActiveX
application components (called ACE Basics), which are used to underpin
analytic functionality in applications.
Ten ACE Basic components are currently provided (although further
components continue to be rolled out by WhiteLight). These are split into:
• non-visual components – Co-ordinator (for data connectivity and query
processing), Cube, Cube-Query and Probe
• presentation and analysis components – such as Graph, Grid, Selector,
Library Explorer and Forms, which support a range of OLAP analysis
filtering, sorting, ranking and navigation functions.
For specialised requirements, an ACE Component Development Kit (CDK) is
provided to create new components using the JavaBeans component model.
The CDK enables developers to create new components by extending off-the-
shelf or custom Java components with ACE methods. Components created
with the ACE CDK can then be re-used in any analytic application in a
similar way to the ACE Basics components. The ACE CDK is also compatible
with a variety of Java interactive development environments. Developers can
use components developed by third-party vendors such as KLGroup,
RogueWave, FormulaOne and ThreeDGraphics. These include components
with advanced functionality, such as geographical mapping (GIS) or
application-specific analysis.
ACE components can be dragged-and-dropped into a web page using
standard web page layout tools. Application developers do not need to
program linkages between components, nor do they need to know CORBA, as
WhiteLight provides the necessary class libraries.
ACE analytic applications are accessible via any JDK 1.1-compliant web
browser.
Future enhancements
WhiteLight Systems plans a number of new features that will start to appear
in the first quarter of 2000, including:
• support for server-side ACE components – these components currently
run on the client machine as Java applets. The aim is to provide a thinner
(HTML) client, which will enable access from a corporate intranet,
extranet or the Internet
• a layer of XML integration and communication facilities – allowing
navigation into XML-based document repositories
• relevance ranking on documents – including support for keyword
searches.
A number of new ACE components will also be delivered throughout 2000.
These include Drill Through, Cell Explorer, Calculated Write-Back,
Personalise, Animator (which graphically depicts the changes of data values
over a time period) and GIS.
Integration with third-party OLAP servers (via OLE DB for OLAP) and
Microsoft Repository (for common metadata access) are also planned for the
longer term. WhiteLight also plans to deliver several new applications for the
financial sector and additional vertical industries such as:
• electricity/power trading and risk analysis
Commercial background
Company background
Customer support
Support
The Technical Support Services group offers telephone hotline, e-mail and
web support in North America and Europe. Support is also available in other
territories through Sybase, VARs and business partners.
Training
Public and on-site training is provided for all WhiteLight modules, including
courses on modelling fundamentals, model building and systems
administration. A ‘train the trainer’ programme is also available.
Consultancy services
WhiteLight Systems provides a range of generic consulting services for
product integration, deployment and application design via its Solutions
Centre organisation. The Financial Solutions Group (FSG) provides a range
of professional services aimed specifically at the financial services industry.
Central to the FSG is the Enterprise Risk Architecture, a set of application
templates and financial models for profitability analysis and risk
management applications – Portfolio Management and Risk-adjusted
Profitability.
Around 20% of WhiteLight’s revenues come from consulting, but the company
expects the services side of its business to grow substantially.
Distribution
Head office
WhiteLight Systems (corporate headquarters)
Suite 100, 2191 East Bayshore Road
Palo Alto, CA 94303
USA
Tel: +1 650 843 3000
Fax: +1 650 843 3910
Europe
WhiteLight Systems
Unit 4, Bracknell Beeches
Old Bracknell Lane
Bracknell
Berkshire, RG12 7BW
UK
Tel: +44 1344 310070
Fax: +44 1344 310071
E-mail: info@whitelight.com
http://www.whitelight.com
Product evaluation
End-user functionality
Summary
One of WhiteLight’s greatest strengths is its support for all aspects of the
model-building process. Graphical and easy-to-use modelling tools are
provided to map data sources to multidimensional data schemas. Analysts
can use the schema to develop ‘base’ models that can easily be extended, by
applying complex and highly granular business rules and calculations. The
initial construction of ‘base’ model components is carried out up-front (without
end-user intervention). WhiteLight’s ActiveRules feature provides a repository
for re-usable components and a high-level interface for building models that
avoids the need for complex programming tasks.
Unlike traditional ‘black box’ approaches to business modelling, the tools
provide a range of audit and integrity features. Multiple designers are well
supported by a version-controlled repository. A bonus is the tight integration
with Sybase’s PowerDesigner data modelling tools.
Basic design
Design interface
WhiteLight uses point-and-click functions for almost all aspects of the model-
building process. Models are created by dragging-and-dropping re-usable
model components (such as dimensions and measures) in a graphical
environment.
Visualising the data source
The Schema Explorer can be used to view the underlying database schema,
but there is no support for data sampling.
Universally available mapping layer
There is no ‘universally’ available mapping layer. Instead, the ‘base model’
provides the initial mapping to disengage data analysis from underlying
relational database concepts.
Prompts for metadata
Model designers are not explicitly prompted to provide contextual metadata.
However, fields to capture such metadata are prominently displayed in the
design interfaces.
Multiple designers
Other than standard model-locking mechanisms, there is no special support
for multi-designer environments.
Support for versioning
Multiple copies of models can be saved in a repository. This enables different
analytic applications to be managed on the same server, each utilising
different iterations of the same base-level model. However, this is not true
version control.
Summary
Simple regression
WhiteLight supports simple linear regression functions, such as slope,
intercept and forecast. Multiple regression techniques are also supported.
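The simple regression functions named above follow ordinary least squares; a minimal sketch of each (illustrative, not WhiteLight’s implementation):

```python
# Ordinary least squares versions of the simple regression functions
# described above: slope, intercept and forecast. Illustrative only.

def slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def intercept(xs, ys):
    return sum(ys) / len(ys) - slope(xs, ys) * sum(xs) / len(xs)

def forecast(x, xs, ys):
    # project the fitted line to a new x value
    return intercept(xs, ys) + slope(xs, ys) * x

quarters = [1, 2, 3, 4]
sales = [100.0, 110.0, 120.0, 130.0]
print(forecast(5, quarters, sales))   # 140.0
```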
Time-series forecasting
WhiteLight provides support for leads and lags, enabling simple time-series
analysis functions. However, it does not support any advanced de facto
time-series forecasting algorithms.
User-definable extensions
The analytical capabilities of the WhiteLight Server can be extended by
defining and/or adding external analytical components (such as risk
management engines or specialised statistics) via the CORBA extensions.
These analytical components are re-usable and can easily be applied to
business models.
Datamining
There is no direct support for datamining. However, datamining analytics
(such as cluster analysis) can be integrated into applications as model
components.
Web support
Summary
Management
Summary
Management of models
Separate management interface
The Server Console provides the main graphical administration interface for
monitoring client connections, managing model and end-user security, and
setting cache parameters.
Security of models
The server supports database, report-level, cell-based and network-based
security. Read-write access to models (down to cell level) and the underlying
database can be granted to users and groups of users on a per-model basis.
Security ‘domains’ specify which cells can be seen or written back to. Model
owners and systems administrators can both assign security.
Security controls also benefit from WhiteLight’s ActiveRules technology, and
are automatically updated when a change to the business model is made.
Query monitoring
All OLAP queries are recorded in the WhiteLight server log and can be used
to generate usage statistics. The log file includes information about the SQL
generated, the associated cache entries, the author, and the date and time
the query was executed. However, administrators cannot edit the SQL
generated by the WhiteLight server directly.
Management of data
How persistent data is stored (not scored)
The data used for analysis is typically stored in a relational database, and
can thus be managed using standard database utilities such as backup,
import and export. WhiteLight does not store persistent versions of models
in an RDBMS; the WhiteLight repository, which stores models, metadata and
reports, is proprietary.
Calculation results can also be cached persistently on the WhiteLight server
(in the MultiCache). The advantage of using cached data is that it speeds up
query times.
Scheduling of loads or updates
The WhiteLight server exercises no control over the loading or updating of
data into the data warehouse or any other source databases. However, third-
party tools can be used to upload the MultiCache after updates to the data
warehouse and to refresh reports.
Event-driven scheduling
Event-driven scheduling is not directly supported.
Failed loads or updates
WhiteLight has no control over the loading or updating of data into the data
warehouse, so there are no facilities for reporting failed loads or updates.
Distribution of stored data
A partitioned data feature allows data to be divided amongst multiple tables,
databases or WhiteLight Servers. Data can also be cached on clients, and a
mixture of caching options is supported.
Sparsity (only for persistent models)
Sparsity handling is not a major issue for ROLAP-oriented products such as
WhiteLight. It is typically handled by the RDBMS, where the majority of
processing occurs.
Methods for managing size
The WhiteLight Server places no limits on database size or dimensionality.
However, the size of the MultiCache is restricted to 2GB. Administrators
cannot remove specific cache entries, but they can limit the overall size of the
cache and number of entries.
In-memory caching options
Administrators can determine how best to optimise the allocation of cache
memory resources; for example, specifying a maximum number of cache
entries.
Informing the user when stored data was last uploaded
Each WhiteLight model has a timestamp that shows when it was last
refreshed (typically, when the data warehouse is updated). This information
is easily accessible by end users.
Management of users
Multiple users of models with write facilities
WhiteLight supports multi-user write-back to the data warehouse by using
supported interface standards. It relies on the underlying RDBMS for
transactional integrity, ensuring that all cells in a single update are
batched. If any cell value fails to update, all cells are rolled back and an
error message is generated for all users. The MultiCache is ‘locked’ for all
other users while a write-back is being made.
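The batched write-back behaviour described above can be sketched with SQLite standing in for the warehouse RDBMS (an illustration, not WhiteLight’s mechanism): all cell updates travel in one transaction, so one failing cell rolls back every cell in the batch.

```python
# A sketch of batched write-back: SQLite stands in for the warehouse RDBMS.
# All cell updates run in one transaction, so a failing cell rolls back
# every cell in the batch. Table and column names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cells (addr TEXT PRIMARY KEY, value REAL NOT NULL)")
conn.execute("INSERT INTO cells VALUES ('a', 1.0), ('b', 2.0)")
conn.commit()

def write_back(conn, updates):
    """Apply all cell updates as one batch; roll back all on any failure."""
    try:
        with conn:   # one transaction for the whole batch
            conn.executemany("UPDATE cells SET value = ? WHERE addr = ?",
                             updates)
    except sqlite3.Error:
        return False   # every cell rolled back; caller reports the error
    return True

write_back(conn, [(10.0, "a"), (20.0, "b")])   # succeeds: both applied
write_back(conn, [(30.0, "a"), (None, "b")])   # NULL violates NOT NULL:
                                               # both updates rolled back
print(conn.execute("SELECT value FROM cells WHERE addr='a'").fetchone())
# (10.0,) – the partial update to 'a' did not survive the failed batch
```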
User security profiles
User security is performed graphically using the ‘users and groups’ interface.
Three levels of access can be specified – end user, analyst and administrator
– and each has increasing levels of services and capabilities. A graphical
interface is provided for assigning users to groups and defining model access
privileges.
Query governance
Query governance is enforced at the database level and relies mainly on the
facilities provided by the source RDBMS for query control. WhiteLight can
also restrict certain calculations by using rule ‘domains’ that restrict the
maximum number of rows a query can return to the server.
Restricting queries to specified times
It is not possible to restrict queries to specified times.
Management of metadata
Controlling visibility of the ‘roadmap’
There are no special features to control the visibility of metadata.
Adaptability
Summary
Metadata
Synchronising model and model metadata
There is no automatic support for keeping WhiteLight Schema and other
metadata descriptions synchronised with models. When items are removed
from WhiteLight Schema, models that refer to them cannot be accessed.
Impact analysis
There is no direct support for analysing the impact of a change across related
models. However, WhiteLight’s concept of ‘referential integrity’ restricts the
deletion of dimension elements that are referred to elsewhere in the model.
Metadata audit trail (technical and end users)
A metadata audit that shows the history of the metadata is not supported.
Performance tunability
Summary
ROLAP
Multipass SQL
WhiteLight supports the automatic generation of multipass SQL.
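Multipass SQL resolves a single multidimensional request through several dependent SELECT statements. A hypothetical example of the kind of request that needs it (not WhiteLight’s generated SQL) is a ‘share of total’ calculation, which is computed here in two passes and combined on the server:

```python
# A hypothetical two-pass example of a request that needs multipass SQL:
# each region's share of total sales. Pass 1 fetches the grand total,
# pass 2 fetches per-region totals; the shares are combined on the server.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("North", 60.0), ("South", 30.0), ("South", 10.0)])

total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]      # pass 1
regions = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region").fetchall()  # pass 2

shares = {region: subtotal / total for region, subtotal in regions}
print(shares)   # {'North': 0.6, 'South': 0.4}
```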
Options for SQL processing
Query processing can be optimally distributed between the WhiteLight
Server and the RDBMS – depending upon the type of analysis and available
resources. Around 30 SQL optimisation routines are provided to balance the
processing. The system also minimises the complexity of the generated SQL
to ensure compatibility with the RDBMS’s optimiser.
Speeding up end-user data access
The MultiCache can be used to cache some query results for faster
performance, which avoids the need to re-visit data sources to satisfy
queries.
In response to a new query, the calculation engine will check the cache and
use pertinent cached values before it performs additional data retrieval or
calculations. The MultiCache is ‘self-tuning’: the more it is used, the faster it
becomes (provided there is substantial re-use of data between subsequent
queries).
Aggregate navigator
An aggregate navigator is built into WhiteLight’s query optimiser. It uses
database row counts to direct queries to the smallest table that can resolve
the query, and WhiteLight automatically finds the ‘next best fit’ if there is
no aggregate table with the exact level of aggregation available.
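Aggregate navigation of this kind can be sketched as follows (a hypothetical illustration – the table names, row counts and levels are invented): route each query to the smallest table whose grain is fine enough to answer it.

```python
# A sketch of aggregate navigation as described above: route a query to
# the smallest table (by row count) whose aggregation level can satisfy
# it. Table names, row counts and levels are hypothetical.

# (row count, level of time aggregation); lower level = finer grain
tables = {
    "sales_detail":  (5_000_000, 0),   # day level
    "sales_month":   (  200_000, 1),
    "sales_quarter": (   40_000, 2),
}

def navigate(query_level):
    """Pick the smallest table at or below the query's aggregation level."""
    candidates = [(rows, name) for name, (rows, lvl) in tables.items()
                  if lvl <= query_level]   # fine enough to answer the query
    return min(candidates)[1]              # smallest row count wins

print(navigate(2))   # 'sales_quarter' – exact level available
print(navigate(1))   # 'sales_month'
```

When no table matches a query’s level exactly, the same rule yields the ‘next best fit’: the smallest table at a finer grain, from which the answer can still be aggregated.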
MOLAP
Trading off load time/size and performance
Multidimensional caching (MultiCache) provides fast query response for
frequently-requested information, by caching complex calculations in the
WhiteLight Server. The MultiCache is not a true MDDB; rather, it is an
in-memory cache that can be preloaded and/or loaded dynamically as data is
requested.
Processing
Use of native SQL to speed up data extraction
WhiteLight uses ODBC for connectivity to most RDBMSs. Native access is
only provided for Sybase.
Distribution of processing
WhiteLight supports partitioned databases across multiple servers; query
processing can be distributed across the appropriate partitions.
SMP support
WhiteLight Server supports multi-threaded SMP.
Customisation
Summary
The ACE development tools simplify the creation and deployment of custom
web-based analytical interfaces. They provide re-usable application
components, written in Java, which are capable of automatically finding and
interacting with each other. Applications can therefore be assembled quickly
by dragging-and-dropping pre-defined analytic components onto a web page
using standard web layout tools.
The approach is best suited to assembly by business analysts, rather than to
complex application development – the tools are object-based rather than
object-oriented. For more specialised applications, a development kit is
provided for creating new components.
Customisation
Option of a restricted interface
WhiteLight Analyst provides three types of interfaces that offer decreasing
levels of functionality for administrators, analysts and casual end users.
NetPublish also supports restricted interfaces.
Ease of producing EIS-style reports
WhiteLight provides a wizard-based briefing book construction toolkit for
publishing EIS-type interfaces. Alternatively, briefing books can be built via
drag-and-drop using web-authoring tools (supported by ACE).
Applications
Simple web applications
ACE provides a suite of Java-based components that can be dragged-and-
dropped onto any web page using standard authoring tools. These
components are also available from JavaScript programming interfaces.
Development environment
ACE supports a visual drag-and-drop environment for the development of
web-based analytic applications. A pre-built library of re-usable components
is provided. For specialised requirements, ACE also includes a Component
Development Kit (CDK), which allows developers or third parties to create
new components and add them to the ACE environment.
Use of third-party development tools
There is no integration with third-party client-server development
environments such as Visual Basic or PowerBuilder. However, ACE
developers can use any web-authoring tool that supports Java and HTML,
such as Microsoft FrontPage, Visual InterDev and NetObjects Fusion, and
certified Java development environments such as Microsoft Visual J++, Sybase
PowerJ, Inprise JBuilder and Symantec Visual Café.
Deployment
Platforms
Client
WhiteLight Workbench runs on Windows 95, Windows 98 and Windows NT
workstations. ACE-assembled applications can operate in any JDK 1.1-compliant
web browser – including Microsoft Internet Explorer and Netscape
Navigator.
Server
The WhiteLight Analytic Application Server runs on Windows NT and
Solaris.
Data access
The WhiteLight Server can access Sybase (Adaptive Server/IQ), Oracle,
Informix ODS, Microsoft SQL Server, IBM DB2, Red Brick Warehouse, NCR
Teradata and other ODBC-accessible RDBMSs.
In addition to RDBMS support, the WhiteLight Server can access data from
spreadsheets, ERP applications, legacy applications, realtime data feeds and
web-based data sources, such as HTML and XML. Integration is through a
combination of OLE DB for OLAP, CORBA and MIME standards.
Standards
The WhiteLight Server supports Microsoft’s OLE DB for OLAP API as a data
provider.
Published benchmarks
WhiteLight does not have any published OLAP benchmarks.
Price structure
WhiteLight Analytic Application Server configurations, with a five-user
licence, start at around $70,000; the WhiteLight Workbench and ACE client
components are included in this cost. The Excel add-in costs around $200 per
client.
The Portfolio Management and Adjusted Profitability modules are priced at
$100,000 each, and include implementation services.