Data Warehousing
CMPT 354, Simon Fraser University, Fall 2005, Martin Ester 300
Introduction
Increasingly, organizations are analyzing
current and historical data to identify useful
patterns and support business strategies
(Decision Support).
Emphasis is on complex, interactive,
exploratory analysis of very large datasets
created by integrating data from across all
parts of an enterprise; data is fairly static.
Contrast such On-Line Analytic Processing
(OLAP) with traditional On-line Transaction
Processing (OLTP): mostly long queries, instead
of short update transactions.
DBS for Decision Support
Data Warehouse: Consolidate data from many
sources in one large repository.
Loading, periodic synchronization of replicas.
Semantic integration.
OLAP:
Complex SQL queries and views.
Queries based on “multidimensional” view of data
and spreadsheet-style operations.
Interactive and “online” (manual) analysis.
Data Mining: Automatic discovery of
interesting trends and other patterns.
Data Warehousing
A Data Warehouse is a subject-oriented,
integrated, time-variant, non-volatile collection
of data for the purpose of decision support.
Integrates data from several operational
(OLTP) databases.
Keeps (relevant part of the) history of the data.
Views data at a more abstract level than OLTP
systems (aggregate over many detail records).
Data Warehouse Architecture
[Figure: Data warehouse architecture. External data sources feed the DATA WAREHOUSE through EXTRACT, INTEGRATE, TRANSFORM, and LOAD/REFRESH steps; a metadata repository describes the warehouse contents; the warehouse supports OLAP and DATA MINING.]
Data Warehousing
Integrated data spanning long time periods,
often augmented with summary information.
Data warehouse keeps the history. Therefore,
several gigabytes to terabytes common.
Interactive response times expected for
complex queries.
On the other hand, ad-hoc updates uncommon.
Data Warehousing Issues
Semantic Integration: When getting data from
multiple sources, must eliminate mismatches,
e.g., different currencies, DB schemas.
Heterogeneous Sources: Must access data from
a variety of source formats and repositories.
Replication capabilities can be exploited here.
Load, Refresh, Purge: Must load data,
periodically refresh it, and purge too-old data.
Metadata Management: Must keep track of
source, loading time, and other information for
all data in the warehouse.
Multidimensional Data Model
Consists of a collection of dimensions
(independent variables) and (numeric)
measures (dependent variables).
Each entry (cell) aggregates the value(s) of the
measure(s) for all records that fall into that cell,
i.e. for all records that in each dimension have
attribute values corresponding to the value of
the cell in this dimension.
Example: dimensions Product (pid), Location
(locid), and Time (timeid) and measure Sales.
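The aggregation into cells can be sketched in a few lines of Python (a minimal illustration, not from the slides; the record layout is assumed): each cell is keyed by one value per dimension, and the measure of all matching detail records is aggregated with SUM.

```python
# Aggregate a measure (sales) into cells keyed by (pid, timeid, locid).
from collections import defaultdict

# (pid, timeid, locid, sales) detail records -- illustrative values
records = [
    (11, 1, 1, 25), (11, 2, 1, 8), (12, 1, 1, 30),
]

cells = defaultdict(int)
for pid, timeid, locid, sales in records:
    cells[(pid, timeid, locid)] += sales  # SUM as the aggregate function

print(cells[(11, 1, 1)])  # 25
```

With one record per cell, as here, each cell simply holds that record's measure; with several matching records, their sales values are summed.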
Multidimensional Data Model
Tabular representation (fact table with dimensions pid, timeid, locid and measure sales):

pid  timeid  locid  sales
11   1       1      25
11   2       1      8
11   3       1      15
12   1       1      30
12   2       1      20
12   3       1      50
13   1       1      8
13   2       1      10
13   3       1      10
11   1       2      35

Multidimensional representation (the slice locid=1 is shown; rows: pid, columns: timeid):

pid \ timeid    1    2    3
11             25    8   15
12             30   20   50
13              8   10   10
Multidimensional Data Model
For each dimension, the set of values can be
organized in a concept hierarchy (subset
relationship), e.g.
[Figure: example concept hierarchies for the PRODUCT, TIME, and LOCATION dimensions; e.g., in TIME, values roll up through quarter to year, and in LOCATION, values roll up to country.]
OLAP Queries
Drill-down: The inverse of roll-up, i.e., moving from a coarser to a finer level of aggregation.
E.g., given total sales by state, can drill down to get
total sales by city.
E.g., can also drill down on a different dimension to
get total sales by product for each state.
Pivoting: Aggregation on selected dimensions.
E.g., pivoting on Location and Time yields this cross-tabulation:

        WI    CA   Total
1995    63    81    144
1996    38   107    145
1997    75    35    110
Total  176   223    399

Slicing and Dicing: Equality and range selections on one
or more dimensions.
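A pivot like the cross-tabulation above can be sketched in Python (a minimal illustration, not from the slides): given per-(year, state) sums, compute the row, column, and grand totals.

```python
# Cross-tabulation totals from per-(year, state) sales sums.
from collections import defaultdict

sales = {(1995, 'WI'): 63, (1995, 'CA'): 81,
         (1996, 'WI'): 38, (1996, 'CA'): 107,
         (1997, 'WI'): 75, (1997, 'CA'): 35}

row_tot = defaultdict(int)  # per-year totals
col_tot = defaultdict(int)  # per-state totals
for (year, state), v in sales.items():
    row_tot[year] += v
    col_tot[state] += v
grand = sum(sales.values())

print(row_tot[1995], col_tot['WI'], grand)  # 144 176 399
```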
Comparison with SQL Queries
The cross-tabulation obtained by pivoting can
also be computed using a collection of SQL
queries, e.g.
SELECT T.year, L.state, SUM(S.sales)
FROM Sales S, Times T, Locations L
WHERE S.timeid = T.timeid AND S.locid = L.locid
GROUP BY T.year, L.state
The Cube Operator
An entry of a data cube is called a cell.
The number of cells of a data cube with d
dimensions is

∏_{i=1}^{d} (|Domain_i| + 1)

where the “+1” in each dimension accounts for the special value “all”, representing aggregation over that dimension.
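The cell count can be checked with a small worked example (illustrative domain sizes, not from the slides):

```python
# Number of cells of a data cube: product over dimensions of (|Domain_i| + 1);
# the "+1" is the special value "all" (aggregation over that dimension).
from math import prod

domain_sizes = [3, 3, 2]  # e.g. 3 products, 3 timeids, 2 locations
num_cells = prod(size + 1 for size in domain_sizes)
print(num_cells)  # 4 * 4 * 3 = 48
```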
The Cube Operator
The Cube Operator computes the measures for
all cells (evaluates all possible GROUP BY
queries) at the same time.
It can be much more efficiently processed than
the set of all corresponding (independent) SQL
GROUP BY queries.
Observation: The results of more generalized
queries (with fewer GROUP BY attributes) can
be derived from more specialized queries (with
more GROUP BY attributes) by aggregating
over the irrelevant GROUP BY attributes.
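This derivability is easy to see for a distributive aggregate such as SUM (a minimal sketch, not from the slides; the input values are illustrative): the coarser GROUP BY year is obtained by aggregating the finer GROUP BY (year, state) result over state.

```python
# Derive a coarser GROUP BY result from a finer one by summing away an attribute.
from collections import defaultdict

# finer query: SUM(sales) grouped by (year, state)
by_year_state = {(1995, 'WI'): 63, (1995, 'CA'): 81,
                 (1996, 'WI'): 38, (1996, 'CA'): 107}

# coarser query: SUM(sales) grouped by year, without touching the fact table
by_year = defaultdict(int)
for (year, state), total in by_year_state.items():
    by_year[year] += total

print(by_year[1995])  # 144
```

Note this works for SUM, COUNT, MIN, and MAX; AVG needs SUM and COUNT carried along separately.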
The Cube Operator
Process more specialized queries first and,
based on their results, determine the outcome
of more generalized queries.
Significant reduction of I/O cost, since
intermediate results are much smaller than
original (fact) table.
The Cube Operator
Lattice of GROUP-BY queries of a CUBE query
w.r.t. derivability of the results.
Example:
[Figure: lattice over the subsets {A, B, . . .} of the GROUP BY attributes, from the full attribute set down to { }; an edge X → Y means the result of Y is derivable from the result of X.]
Implementation Issues
In the following, we adopt the ROLAP
implementation.
Fact table normalized (redundancy-free).
Dimension tables un-normalized.
Dimension tables are small;
updates/inserts/deletes are rare. So, anomalies
less important than query performance.
This kind of schema is very common in OLAP
applications, and is called a star schema;
computing the join of all these relations is
called a star join.
Implementation Issues
Example star schema:
[Figure: star schema with a central fact table SALES(pid, timeid, locid, sales) joined to dimension tables, e.g. TIMES(timeid, date, week, month, quarter, year, holiday_flag).]
Bitmap Indexes
New indexing techniques: Bitmap indexes,
Join indexes, array representations,
compression, precomputation of aggregations,
etc.
Example Bitmap index:
[Figure: one bit per possible attribute value for each record; e.g., the sex column is encoded with bit positions [M, F] and the rating column with bit positions [1, 2, 3, 4, 5], as in the table of Example 1 below.]
Bitmap Indexes
Selections can be processed using (efficient!)
bit-vector operations.
Example 1: Find all male customers.

sex  | custid  name  sex  rating |  rating
10   | 112     Joe   M    3      |  00100
10   | 115     Ram   M    5      |  00001
01   | 119     Sue   F    5      |  00001
10   | 112     Woo   M    4      |  00010

(sex bit-vectors use positions [M, F]; rating bit-vectors use positions [1, 2, 3, 4, 5])
To find all male customers, select exactly the records whose M bit is set.
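Combining selections is where bitmap indexes shine: each condition is a bit-vector over the records, and a conjunction is a single bitwise AND. A minimal sketch (not from the slides) using Python integers as bit-vectors, with one bit per record of the table above:

```python
# Bit-vectors over the 4 records; lowest bit = row 0 (Joe).
rows = ['Joe', 'Ram', 'Sue', 'Woo']

male    = 0b1011  # bits set for Joe, Ram, Woo (Sue is female)
rating5 = 0b0110  # bits set for Ram and Sue (rating = 5)

# "male AND rating 5" is one bitwise AND on the vectors
both = male & rating5
selected = [name for i, name in enumerate(rows) if both >> i & 1]
print(selected)  # ['Ram']
```

Real systems store one such vector per attribute value, often run-length compressed, and evaluate multi-attribute selections the same way.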
Join Indexes
Problem: Number of join indexes can grow
rapidly.
In order to efficiently support all possible
selections in a data cube, you need one join
index for each subset of the set of dimensions.
E.g., one join index each for
[s,p,t,l], [s,p,t], [s,p,l], [s,t,l], [s,p], [s,t], [s,l], etc.
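The growth is exponential in the number of dimensions: with d dimensions there are 2^d − 1 non-empty attribute subsets, hence that many join indexes. A quick check (illustrative, not from the slides):

```python
# Enumerate all non-empty subsets of the dimension attributes.
from itertools import combinations

dims = ['s', 'p', 't', 'l']  # the four attributes abbreviated above
subsets = [c for r in range(1, len(dims) + 1)
           for c in combinations(dims, r)]
print(len(subsets))  # 2**4 - 1 = 15 join indexes needed
```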
Bitmapped Join Indexes
A variation of join indexes addresses this
problem, using the concept of Bitmap indexes.
For each attribute of each dimension table with
an additional selection (e.g., country), build a
Bitmap index.
Index contains, e.g., entry [c,s] if a dimension
table tuple with value c in the selection column
joins with a Sales tuple with sid s. Note that s
denotes the compound key of the fact table,
e.g. [pid, timeid, locid].
The Bitmap index version is especially efficient
(Bitmapped Join Index).
Bitmapped Join Indexes
[Figure: example of a bitmapped join index, again over the TIMES dimension table (timeid, date, week, month, quarter, year, holiday_flag).]
Online Aggregation
Consider an aggregate query, e.g., finding the
average sales by state.
If we do not have a corresponding (materialized)
data cube, processing this query from scratch
can be very expensive.
In general, we have to scan the entire fact table.
But the user expects interactive response time.
An approximate result may be acceptable to the
user.
Online Aggregation
Can we provide the user with some approximate
results before the exact average is computed for
all states?
Can show the current “running average” for each
state as the computation proceeds.
Even better, we can use statistical techniques and
sample the tuples to be aggregated instead of
scanning the entire table.
E.g., we can provide bounds such as “the
average for Wisconsin is 2000 ± 102 with 95%
probability”.
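Such a running estimate can be sketched as follows (a minimal illustration under assumptions not in the slides: synthetic data and a normal approximation for the confidence bound):

```python
# Online aggregation sketch: running average with an approximate 95% bound.
import random
import statistics

random.seed(0)
# synthetic "Wisconsin sales" values standing in for the fact-table tuples
population = [random.gauss(2000, 500) for _ in range(100_000)]

sample, bounds = [], []
for x in population[:400]:          # stop early instead of scanning everything
    sample.append(x)
    if len(sample) >= 30:           # enough data for the normal approximation
        mean = statistics.fmean(sample)
        half = 1.96 * statistics.stdev(sample) / len(sample) ** 0.5
        bounds.append((mean, half))

mean, half = bounds[-1]
print(f"estimate {mean:.0f} ± {half:.0f} with ~95% confidence")
```

As more tuples are sampled, the half-width shrinks roughly like 1/√n, so the user can stop the query as soon as the bound is tight enough.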
Summary
Decision support is an emerging, rapidly
growing subarea of database systems.
Involves the creation of large, consolidated data
repositories called data warehouses.
Warehouses exploited using sophisticated
analysis techniques: complex SQL queries and
OLAP “multidimensional” queries (or automatic
data mining methods).
New techniques for database design, indexing,
view maintenance, and interactive (online)
querying need to be developed.