
Ch. 2: Data Warehouse and OLAP Technology for Data Mining
• What is a data warehouse?

• A multi-dimensional data model

• Data warehouse architecture

• Data warehouse implementation

• Further development of data cube technology

• From data warehousing to data mining

1
What is Data Warehouse?
• Defined in many different ways, but not rigorously.
– A decision support database that is maintained separately
from the organization’s operational database.
– Support information processing by providing a solid platform
of consolidated, historical data for analysis.
– W. H. Inmon — “A data warehouse is a subject-oriented,
integrated, time-variant, and nonvolatile collection of
data in support of management’s decision-making process.”

• Data warehousing:
– The process of constructing and using data warehouses

2
Data Warehouse — Key Features (1)
• Subject-Oriented:
– Organized around major subjects:
• E.g., customer, product, sales.
• Focusing on the modeling and analysis of data for decision
makers, not on daily operations or transaction processing.
• Provide a simple and concise view around particular subject
issues by excluding useless data in the decision support process.

• Integrated:
– Constructed from multiple, heterogeneous data sources:
• RDBs, flat files, on-line transaction records, …
– Applying data cleaning and data integration techniques.
• Ensure consistency in naming conventions, encoding structures,
attribute measures, etc. among different data sources
• E.g., Hotel price: currency, tax, breakfast covered, etc.
– When data is moved to the warehouse, it is converted.
3
Data Warehouse — Key Features (2)
• Time Variant:
– The time horizon for the data warehouse is significantly longer
than that of operational systems.
• Current value data ⇔ historical perspective info. (past 5-10 yr).
– Every key structure in the data warehouse:
• Contains an element of time [explicitly or implicitly],
• But the key of tuples may or may not contain “time element”.

• Non-Volatile:
– A physically separate store of data transformed (copied) from
the operational environment into one semantic data store.
• Operational data update does not occur in data warehouse.
• Require no transaction processing, recovery, and concurrency
control mechanisms. [⇐ on-line database operations]
– Requires only two operations in data accessing:
• initial loading of data and access of data.
4
Compared w. Heterogeneous DBMS
• Traditional heterogeneous DB integration:
– Build wrappers/mediators on top of heterogeneous DB.
• E.g., IBM Data Joiner and Informix DataBlade.

– Query-driven approach: [costly]


• When a query is posed to a client site,
a meta-dictionary is used to translate the query into queries
appropriate for individual heterogeneous sites involved, and
the results are integrated into a global answer set
• Complex information filtering, compete for resources.

• Data warehouse: update-driven, high performance


– Information from heterogeneous sources is integrated
in advance and stored in warehouses for direct query and
analysis.
5
Compared w. Operational DBMS
• OLTP (on-line transaction processing):
– Major task of traditional relational DBMS.
– Day-to-day operations: purchasing, inventory, banking,
manufacturing, payroll, registration, accounting, etc.

• OLAP (on-line analytical processing):


– Major task of data warehouse system.
– Data analysis and decision making.

• Distinct features (OLTP ⇔ OLAP): [ref. Table 2.1 p.43]

– User and system orientation: customer (query) ⇔ market.


– Data contents: current, detailed ⇔ historical, consolidated.
– Database design: ER + application ⇔ star + subject.
– View: current, local ⇔ evolutionary, integrated.
– Access patterns: update ⇔ read-only but complex queries.
6
OLTP vs. OLAP [ref. Table 2.1 p.43]

                    OLTP                         OLAP
users               clerk, IT professional       knowledge worker
function            day-to-day operations        decision support
DB design           application-oriented         subject-oriented
data                current, up-to-date;         historical; summarized,
                    detailed, flat relational;   multidimensional;
                    isolated                     integrated, consolidated
usage               repetitive                   ad hoc
access              read/write; index/hash       lots of scans
                    on primary key
unit of work        short, simple transaction    complex query
# records accessed  tens                         millions
# users             thousands                    hundreds
DB size             100 MB to GB                 100 GB to TB
metric              transaction throughput       query throughput, response time

7
Why Separate Data Warehouse?
• High performance for both systems: [asynchronous]
– DBMS ⇒ tuned for OLTP:
• access methods, indexing, concurrency control, recovery, etc.
– Warehouse ⇒ tuned for OLAP:
• complex OLAP queries, multidimensional view, consolidation.

• Different functions and different data:


– missing data: Decision support (DS) requires historical
data which operational DBs do not typically maintain.
– data consolidation: DS requires consolidation (aggregation,
summarization) of data from heterogeneous sources.
– data quality: different sources typically use inconsistent data
representations, codes & formats which have to be reconciled.

8
From Tables and Spreadsheets to Data Cubes
• Data warehouses utilize an n-dimensional data model
to view data in the form of a data cube.
– A data cube, such as sales, allows data to be modeled and
viewed in multiple dimensions.
• Dimensions: time, item, branch, location, supplier, …
– Dimension tables, each for a dimension.
• E.g., item (item_name, brand, type).
• E.g., time (day, week, month, quarter, year).
– Fact table contains numerical measures (say, dollars_sold,
units_sold, amount_budgeted, …) and keys to each of the
related dimension tables.
• In data warehousing literature,
– An n-D base cube is called a base cuboid.
– The top most 0-D cuboid, which holds the highest-level of
summarization, is called the apex cuboid.
– The lattice of cuboids forms a data cube. [group by]
9
Cube: A Lattice of Cuboids

A 4-D cube on (time, item, location, supplier) forms a lattice of 2^4 = 16 cuboids:
• 0-D (apex) cuboid: all
• 1-D cuboids: (time), (item), (location), (supplier)
• 2-D cuboids: (time, item), (time, location), (time, supplier), (item, location), (item, supplier), (location, supplier)
• 3-D cuboids: (time, item, location), (time, item, supplier), (time, location, supplier), (item, location, supplier)
• 4-D (base) cuboid: (time, item, location, supplier)

10
Conceptual Modeling of Data Warehouses
• Modeling data warehouses: dimensions & measures
– Star schema: A fact table in the middle connected to a set of
dimension tables

– Snowflake schema: A refinement of star schema where


some dimensional hierarchy is normalized into a set of smaller
dimension tables, forming a shape similar to snowflake.

– Fact constellations, or Galaxy schema : Multiple fact


tables share dimension tables, viewed as a collection of stars,
therefore called galaxy schema or fact constellation.

11
Example of Star Schema
• Sales Fact Table: time_key, item_key, branch_key, location_key; measures: units_sold, dollars_sold, avg_sales.
• Dimension time: time_key, day, day_of_the_week, month, quarter, year.
• Dimension item: item_key, item_name, brand, type, supplier_type.
• Dimension branch: branch_key, branch_name, branch_type.
• Dimension location: location_key, street, city, province_or_state, country.

12
Example of Snowflake Schema
• Sales Fact Table: time_key, item_key, branch_key, location_key; measures: units_sold, dollars_sold, avg_sales.
• Dimension time: time_key, day, day_of_the_week, month, quarter, year.
• Dimension item: item_key, item_name, brand, type, supplier_key; normalized supplier table: supplier_key, supplier_type.
• Dimension branch: branch_key, branch_name, branch_type.
• Dimension location: location_key, street, city_key; normalized city table: city_key, city, province_or_state, country.

13
Example of Fact Constellation
• Sales Fact Table: time_key, item_key, branch_key, location_key; measures: units_sold, dollars_sold, avg_sales.
• Shipping Fact Table: time_key, item_key, shipper_key, from_location, to_location; measures: dollars_cost, units_shipped.
• Dimensions time, item, and location are shared by both fact tables (as in the star schema).
• Dimension shipper: shipper_key, shipper_name, location_key, shipper_type.
14
A Data Mining Query Language, DMQL:
Language Primitives
• Cube Definition (Fact Table):
– define cube <cube_name> [<dimension_list>]:
<measure_list>

• Dimension Definition (Dimension Table):


– define dimension <dimension_name> as
(<attribute_or_subdimension_list>)

• Special Case (Shared Dimension Tables):


– The shared dimension is defined once, in the first cube, and later cubes reference it:
– define dimension <dimension_name> as
<dimension_name_first_time> in cube
<cube_name_first_time>
15
Defining a Star Schema in DMQL
• Procedure:
– define cube sales_star [time, item, branch, location]:
units_sold = count(*), dollars_sold = sum(sales_in_dollars),
avg_sales = avg(sales_in_dollars)

– define dimension time as (time_key, day, day_of_week,


month, quarter, year)

– define dimension item as (item_key, item_name, brand, type,


supplier_type)

– define dimension branch as (branch_key, branch_name,


branch_type)

– define dimension location as (location_key, street, city,


province_or_state, country)
16
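The DMQL definitions above translate naturally into SQL DDL. Below is a minimal sketch using Python's sqlite3 with an in-memory database; the row contents, and the restriction to two of the four dimensions, are illustrative assumptions, not from the slides:

```python
import sqlite3

# Star schema sketch: one fact table plus two of its dimension tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE time_dim (time_key INTEGER PRIMARY KEY, day TEXT,
                       day_of_week TEXT, month TEXT, quarter TEXT, year INTEGER);
CREATE TABLE item (item_key INTEGER PRIMARY KEY, item_name TEXT,
                   brand TEXT, type TEXT, supplier_type TEXT);
CREATE TABLE sales_fact (time_key INTEGER REFERENCES time_dim,
                         item_key INTEGER REFERENCES item,
                         units_sold INTEGER, dollars_sold REAL);
""")
conn.execute("INSERT INTO time_dim VALUES (1, '15', 'Monday', 'Jan', 'Q1', 1997)")
conn.execute("INSERT INTO item VALUES (1, 'TV', 'Acme', 'electronics', 'wholesale')")
conn.execute("INSERT INTO sales_fact VALUES (1, 1, 5, 2495.0)")

# A star join: the fact table joined to its dimension tables, grouped by
# dimension attributes.
row = conn.execute("""
SELECT t.year, i.item_name, SUM(f.dollars_sold)
FROM sales_fact f JOIN time_dim t ON f.time_key = t.time_key
                  JOIN item i     ON f.item_key = i.item_key
GROUP BY t.year, i.item_name
""").fetchone()
print(row)  # (1997, 'TV', 2495.0)
```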
Defining a Snowflake Schema in DMQL
• Procedure:
– define cube sales_snowflake [time, item, branch, location]:
units_sold = count(*), dollars_sold = sum(sales_in_dollars),
avg_sales = avg(sales_in_dollars)

– define dimension time as (time_key, day, day_of_week,


month, quarter, year)

– define dimension item as (item_key, item_name, brand, type,


supplier(supplier_key, supplier_type))

– define dimension branch as (branch_key, branch_name,


branch_type)

– define dimension location as (location_key, street,


city(city_key, province_or_state, country))
17
Defining a Fact Constellation in DMQL
• define cube sales [time, item, branch, location]:
units_sold = count(*), dollars_sold = sum(sales_in_dollars), avg_sales
= avg(sales_in_dollars)
– define dimension time as (time_key, day, day_of_week, month, quarter, year)
– define dimension item as (item_key, item_name, brand, type, supplier_type)
– define dimension branch as (branch_key, branch_name, branch_type)
– define dimension location as (location_key, street, city, province_or_state,
country)

• define cube shipping [time, item, shipper, from_location,


to_location]: dollar_cost = sum(cost_in_dollars), unit_shipped =
count(*)
– define dimension time as time in cube sales
– define dimension item as item in cube sales
– define dimension shipper as (shipper_key, shipper_name, location as
location in cube sales, shipper_type)
– define dimension from_location as location in cube sales
– define dimension to_location as location in cube sales

18
Measures: Three Categories
• Distributive: if the result derived by applying the function to n
aggregate values (one per data partition) is the same as that
derived by applying the function to all of the data without
partitioning.
– E.g., count(), sum(), min(), max(). ⇐ aggregate

• Algebraic: if it can be computed by an algebraic


function with M arguments (where M is a bounded
integer), each of which is obtained by applying a
distributive aggregate function.
– E.g., avg(), min_N(), standard_deviation(). ⇐ derived

• Holistic: if there is no constant bound on the storage


size needed to describe a sub-aggregate.
– E.g., median(), mode(), rank(). ⇐ statistical
19
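The three categories can be illustrated in a few lines of Python; the partitioned data is made up:

```python
# Hypothetical data split across two partitions, as a warehouse might chunk it.
part1, part2 = [3, 5, 8], [2, 6]

# Distributive: sum over the whole equals the sum of per-partition sums.
total = sum([sum(part1), sum(part2)])
assert total == sum(part1 + part2)  # 24

# Algebraic: avg() is not distributive, but it is computable from a bounded
# number of distributive sub-aggregates (sum and count) kept per partition.
s = sum(part1) + sum(part2)
n = len(part1) + len(part2)
avg = s / n
assert avg == sum(part1 + part2) / len(part1 + part2)  # 4.8

# Holistic: median() has no such bounded summary; per-partition medians
# are not enough to recover the global median.
```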
A Concept Hierarchy: Dimension (location)
Higher levels hold more general concepts; lower levels, more specific ones:
• all: all
• region: Europe, ..., North_America
• country: Germany, ..., Spain; Canada, ..., Mexico
• city: Frankfurt, ...; Vancouver, ..., Toronto
• office: L. Chan, ..., M. Wind
20
View of Warehouses and Hierarchies

Specification of hierarchies:
• Schema hierarchy (e.g., on the time dimension):
  – day < {month < quarter; week} < year
• Set-grouping hierarchy (on attribute values):
  – {1..10} < inexpensive
21
Multidimensional Data
• Sales volume as a function of product, month, and
region.
• Dimensions: Product, Location, Time.
• Each dimension has a hierarchical summarization path:
  – Product: Product < Category < Industry
  – Location: Office < City < Country < Region
  – Time: Day < {Month < Quarter; Week} < Year
22
A Sample Data Cube
A 3-D sales cube with dimensions Product (TV, PC, VCR), Date (1Qtr-4Qtr), and Country (U.S.A., Canada, Mexico); each dimension also carries a "sum" aggregate plane. One such aggregate cell holds, e.g., the total annual sales of TV in the U.S.A.
23
Cuboids Corresponding to the Cube

The 3-D cube above yields 2^3 = 8 cuboids:
• 0-D (apex) cuboid: all
• 1-D cuboids: (product), (date), (country)
• 2-D cuboids: (product, date), (product, country), (date, country)
• 3-D (base) cuboid: (product, date, country)
24
Browsing a Data Cube

• Visualization
• OLAP capabilities
• Interactive manipulation
25
Typical OLAP Operations ref. Fig. 2.10 p.59

• Roll up (drill-up): summarize data


– by climbing up hierarchy or by dimension reduction.

• Drill down (roll down): reverse of roll-up


– from higher level summary to lower level summary or detailed
data, or introducing new dimensions.

• Slice and dice: project and select.


• Pivot (rotate):
– reorient the cube, visualization, 3D to series of 2D planes.

• Other operations
– drill across: involving (across) more than one fact table.
– drill through: through the bottom level of the cube to its
back-end relational tables (using SQL).
26
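Roll-up and slice can be sketched on a toy cube held as a Python dict; the city/item/quarter values are illustrative:

```python
from collections import defaultdict

# A toy sales cube: (city, item, quarter) -> dollars_sold.
cube = {("Vancouver", "TV", "Q1"): 605, ("Vancouver", "TV", "Q2"): 680,
        ("Toronto",   "TV", "Q1"): 818, ("Toronto",   "PC", "Q1"): 968}

city_to_country = {"Vancouver": "Canada", "Toronto": "Canada"}

# Roll-up: climb the location hierarchy from city to country.
rolled = defaultdict(int)
for (city, item, q), v in cube.items():
    rolled[(city_to_country[city], item, q)] += v
print(rolled[("Canada", "TV", "Q1")])  # 1423

# Slice: fix one dimension (quarter = "Q1"), keeping the resulting sub-cube.
q1_slice = {(c, i): v for (c, i, q), v in cube.items() if q == "Q1"}
print(len(q1_slice))  # 3
```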
A Star-Net Query Model
Each circle (abstraction level) on a dimension line is called a footprint. The example's dimension lines and footprints include:
• Time: DAILY < QTRLY < ANNUALLY
• Product: PRODUCT ITEM < PRODUCT LINE < PRODUCT GROUP
• Location: CITY < COUNTRY < REGION
• Organization: SALES PERSON < DISTRICT < DIVISION
• Customer Orders: ORDER < CONTRACTS
• Shipping Method: TRUCK < AIR-EXPRESS
• plus Customer and Promotion dimension lines.
27
Design of a Data Warehouse:
A Business Analysis Framework
• Four views regarding the design of a data warehouse:
– Top-down view:
• allows selection of the relevant information necessary for the data
warehouse.
– Data source view:
• exposes the information being captured, stored, and managed by
operational systems.
– Data warehouse view:
• consists of fact tables and dimension tables.
– Business query view:
• sees the perspectives of data in the warehouse from the view of
end-user.

28
Data Warehouse Design Process
• Top-down, bottom-up approaches or hybrid:
– Top-down: Starts with overall design and planning (mature).
– Bottom-up: Starts with experiments and prototypes (rapid).

• From software engineering point of view:


– Waterfall: structured and systematic analysis at each step
before proceeding to the next.
– Spiral: rapid generation of increasingly functional systems,
with short turnaround times.

• Typical data warehouse design process: Choose


– A business process to model, e.g., orders, invoices, …
– The grain (atomic level of data) of the business process.
– The dimensions for each fact table record.
– The measure that will populate each fact table record.
29
Multi-Tiered Architecture

From back to front:
• Data Sources: operational DBs and other external sources, brought into the warehouse by extract, transform, load, and refresh operations, coordinated by a monitor & integrator and described by metadata.
• Data Storage: the data warehouse and its data marts.
• OLAP Engine: an OLAP server over the stored data ("serve").
• Front-End Tools: analysis, query, reporting, and data mining tools.
30
Three Data Warehouse Models
• Enterprise warehouse:
– collects all of the information about subjects spanning the
entire organization.

• Data Mart:
– a subset of corporate-wide data that is of value to a specific
group of users.
• Its scope is confined to specific, selected groups, such as
marketing data mart.
– Independent vs. dependent (fed directly from the warehouse) data marts.

• Virtual warehouse:
– A set of views over operational databases.
– Only some summary views are materialized.

31
Data Warehouse Development:
A Recommended Approach

Build bottom-up:
• Define a high-level corporate data model.
• Develop data marts and an enterprise data warehouse in parallel, with continual model refinement.
• Integrate them into a multi-tier data warehouse with distributed data marts.
32
OLAP Server Architectures
• Relational OLAP (ROLAP):
– Use relational or extended-relational DBMS to store and
manage warehouse data and OLAP middle ware to support
missing pieces.
– Include optimization of DBMS backend, implementation of
aggregation navigation logic, and additional tools and services.
– greater scalability.
• Multidimensional OLAP (MOLAP):
– Array-based multidimensional storage engine (sparse matrix
techniques).
– fast indexing to pre-computed summarized data.
• Hybrid OLAP (HOLAP):
– Combines both technologies for user flexibility, e.g., low level: relational, high level: array.
• Specialized SQL servers:
– specialized support for SQL queries over star/snowflake
schemas. [Informix’s Red-brick]
33
Cube Operation
• Cube definition and computation in DMQL:
– define cube sales[item, city, year]: sum(sales_in_dollars)
– compute cube sales

• Transform it into a SQL-like language (with a new


operator cube by, introduced by Gray et al.’96):
– SELECT item, city, year, SUM(amount)
  FROM sales
  CUBE BY item, city, year
• Need to compute all 2^n group-bys (here n = 3), which form the lattice of cuboids:
  – (item, city, year),
  – (item, city), (item, year), (city, year),
  – (item), (city), (year),
  – ().
34
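The 2^n group-bys that CUBE BY generates can be enumerated directly; a sketch, with dimension names following the cube definition above:

```python
from itertools import combinations

# Every subset of the dimension set is one group-by of the cube.
dims = ("item", "city", "year")
groupbys = [combo for r in range(len(dims), -1, -1)
            for combo in combinations(dims, r)]
print(len(groupbys))  # 8 = 2^3
print(groupbys[0])    # ('item', 'city', 'year') -- the base cuboid
print(groupbys[-1])   # ()                       -- the apex cuboid
```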
Efficient Data Cube Computation
• Data cube can be viewed as a lattice of cuboids:
– The bottom-most cuboid is the base cuboid.
– The top-most cuboid (apex) contains only one cell.
– How many cuboids in an n-dimensional cube with Li levels?

T = ∏_{i=1}^{n} (L_i + 1), which reduces to 2^n when every L_i = 1 (no hierarchies).

• Materialization of data cube:


– Materialize every cuboid (full materialization),
none (no materialization), or some (partial materialization).
– Selection of which cuboids to materialize.
• Based on size, sharing, access frequency, etc.

35
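The cuboid-count formula, as a quick sketch:

```python
from math import prod

# Number of cuboids when dimension i has L_i hierarchy levels;
# the "+ 1" accounts for the virtual top level "all".
def num_cuboids(levels):
    return prod(l + 1 for l in levels)

assert num_cuboids([1, 1, 1, 1]) == 2 ** 4  # 16, matching the 2^n special case
print(num_cuboids([4, 4, 4]))  # three dimensions with 4 levels each: 125
```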
Cube Computation: ROLAP-Based Method (1)
• Efficient cube computation methods:
– ROLAP-based cubing algorithms (Agarwal et al’96)
– Array-based cubing algorithm (Zhao et al’97)
– Bottom-up computation method (Beyer & Ramakrishnan '99)

• ROLAP-based cubing algorithms:


– Sorting, hashing, and grouping operations are applied to the
dimension attributes to reorder and cluster related tuples.
– Grouping is performed on some sub-aggregates as a “partial
grouping step”.
– Aggregates may be computed from previously computed
aggregates, rather than from the base fact table.

36
Cube Computation: ROLAP-Based Method (2)
• This is not in the textbook but in a research paper
• Hash/sort based methods (Agarwal et al. VLDB'96):
– Smallest-parent: computing a cuboid from the smallest
cuboid previously computed.
– Cache-results: caching results of a cuboid from which other
cuboids are computed to reduce disk I/Os.
– Amortize-scans: computing as many as possible cuboids at
the same time to amortize disk reads.
– Share-sorts: sharing sorting costs across multiple cuboids
when a sort-based method is used.
– Share-partitions: sharing the partitioning cost across multiple
cuboids when hash-based algorithms are used.

37
Multi-way Array Aggregation for Cube
Computation
• Partition arrays into chunks (a small sub-cube fit in memory).
• Compressed sparse array addressing: (chunk_id, offset)
• Compute aggregates in “multiway” by visiting cube cells in the
order which minimizes the # of times to re-visit each cell,
thereby reducing memory access and storage cost.

[Figure: a cube with dimensions A (a0-a3), B (b0-b3), and C (c0-c3), partitioned into 64 chunks numbered 1-64. What is the best traversing order to do multi-way aggregation?]
38
Multi-way Array Aggregation for Cube
Computation
[Figure: scanning chunks in the order 1, 2, 3, ... completes chunk b0c0 of the BC plane after chunks 1-4, while partial aggregates for the AB and AC planes accumulate in the same scan.]
39
Multi-way Array Aggregation for Cube
Computation
Visiting cell a0b0c0 contributes simultaneously to cell a0b0 of plane AB, b0c0 of BC, and a0c0 of AC. With dimension sizes A = 40, B = 400, C = 4000, the plane sizes are AB = 16,000, AC = 160,000, and BC = 1,600,000 cells, so AB is the smallest plane and BC the largest.
40
Multi-Way Array Aggregation for Cube
Computation (cont’d)
• Method: the planes should be sorted and computed
according to their size in ascending order.
– See the details of Example 2.12 (pp. 75-78)
– Idea: keep the smallest plane in the main memory, fetch and
compute only one chunk at a time for the largest plane.

• Limitation of the method: computing well only for a


small number of dimensions
– If there are a large number of dimensions, “bottom-up
computation” and iceberg cube computation methods can be
explored.

– Iceberg cube: store only those partitions whose aggregate value exceeds a threshold.


41
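The multiway idea of feeding every 2-D plane from a single scan of the 3-D cells can be sketched as follows; the array sizes and cell measure are illustrative:

```python
# One scan of the 3-D cells feeds all three 2-D cuboids (AB, AC, BC)
# simultaneously, so no cell is revisited. Sizes mirror a toy 4x4x4 cube.
A, B, C = 4, 4, 4

def cell(a, b, c):
    # Stand-in for the measure stored in cell (a, b, c).
    return a + b + c

AB = [[0] * B for _ in range(A)]
AC = [[0] * C for _ in range(A)]
BC = [[0] * C for _ in range(B)]
for a in range(A):
    for b in range(B):
        for c in range(C):
            v = cell(a, b, c)
            AB[a][b] += v  # aggregate away C
            AC[a][c] += v  # aggregate away B
            BC[b][c] += v  # aggregate away A

# Each plane sums to the same grand total, just grouped differently.
total = sum(map(sum, AB))
assert total == sum(map(sum, AC)) == sum(map(sum, BC))
print(total)  # 288
```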
Indexing OLAP Data: Bitmap Index
• Index on a particular column
• Each value in the column has a bit vector: bit-op is fast
• The length of the bit vector: # of records in the base table
• The i-th bit is set if the i-th row of the base table has the value
for the indexed column
• not suitable for high cardinality domains

Base table               Index on Region                 Index on Type
Cust  Region   Type      RecID  Asia  Europe  America    RecID  Retail  Dealer
C1    Asia     Retail    1      1     0       0          1      1       0
C2    Europe   Dealer    2      0     1       0          2      0       1
C3    Asia     Dealer    3      1     0       0          3      0       1
C4    America  Retail    4      0     0       1          4      1       0
C5    Europe   Dealer    5      0     1       0          5      0       1

42
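A bitmap index over the Region column of the base table above can be sketched with Python integers as bit vectors:

```python
# One bit vector per distinct value; bit i is set iff row i holds that value.
rows = ["Asia", "Europe", "Asia", "America", "Europe"]  # Region column, rows C1-C5

index = {}
for i, value in enumerate(rows):
    index.setdefault(value, 0)
    index[value] |= 1 << i

# Bit operations answer combined predicates without touching the base table.
asia_or_europe = index["Asia"] | index["Europe"]
print(bin(index["Asia"]))   # 0b101   (rows C1 and C3)
print(bin(asia_or_europe))  # 0b10111 (all rows except C4)
```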
Indexing OLAP Data: Join Indices
• Join index: JI(R-id, S-id), where R(R-id, ...) ⋈ S(S-id, ...); in a warehouse, S is typically the fact table.
• Traditional indices map the values to a list
of record ids
– It materializes relational join in JI file and
speeds up relational join — a rather costly
operation
• In data warehouses, a join index relates the values of the
dimensions of a star schema to rows in the fact table.
– E.g. fact table: Sales and two dimensions
city and product
• A join index on city maintains for each
distinct city a list of R-IDs of the tuples
recording the Sales in the city
– Join indices can span multiple dimensions.
43
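A join index on city can be sketched as a dictionary from city values to fact-table row ids; the rows are illustrative:

```python
# Fact-table rows as (transaction_id, city); row position serves as the R-ID.
sales_fact = [("T57", "Main Street"), ("T238", "Main Street"),
              ("T459", "Lakeshore"), ("T884", "Main Street")]

# The join index: for each distinct city, the list of fact-table R-IDs.
join_index = {}
for rid, (tid, city) in enumerate(sales_fact):
    join_index.setdefault(city, []).append(rid)

# The dimension-to-fact join becomes a lookup instead of a scan.
print(join_index["Main Street"])  # [0, 1, 3]
```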
Efficient Processing OLAP Queries
• Determine which operations should be performed on
the available cuboids:
– transform drill, roll, etc. into corresponding SQL and/or OLAP
operations, e.g., dice = selection + projection

• Determine to which materialized cuboids the relevant


operations should be applied.

• Exploring indexing structures and compressed vs.


dense array structures in MOLAP.

44
Metadata Repository
• Meta data is the data defining warehouse objects.
It has the following kinds:
– Description of the structure of the warehouse.
• schema, view, dimensions, hierarchies, derived data definitions,
data mart locations and contents.
– Operational meta-data.
• data lineage (history of migrated data and transformation path),
currency of data (active, archived, or purged), monitoring
information (usage statistics, error reports, audit trails).
– The algorithms used for summarization.
– The mapping from operational environment to the data
warehouse.
– Data related to system performance.
• warehouse schema, view and derived data definitions.
– Business data.
• business terms & definitions, ownership of data, charging policies.
45
Data Warehouse Back-End Tools and Utilities
• Data extraction:
– get data from multiple, heterogeneous, and external sources.
• Data cleaning:
– detect errors in the data and rectify them when possible.
• Data transformation:
– convert data from legacy or host format to warehouse format.
• Load:
– sort, summarize, consolidate, compute views, check integrity,
and build indices and partitions.
• Refresh
– propagate the updates from the data sources to the
warehouse.

46
Discovery-Driven Exploration of Data Cubes

• Hypothesis-driven: exploration by user, huge


search space.
• Discovery-driven: (Sarawagi et al.’98)
– pre-compute measures indicating exceptions, guide user in
the data analysis, at all levels of aggregation.
– Exception: significantly different from the value anticipated,
based on a statistical model.
– Visual cues such as background color are used to reflect the
degree of exception of each cell.
– Computation of exception indicators (model fitting and
computing SelfExp, InExp, and PathExp values) can be
overlapped with cube construction.
47
Examples: Discovery-Driven Data Cubes
[Figure: a cell with high SelfExp triggers a drill-down; PathExp suggests which drill-down path to take, and cells with high InExp guide further drill-downs.]
48
Complex Aggregation at Multiple Granularities:
Multi-Feature Cubes
• Multi-feature cubes (Ross, et al. 1998): Compute
complex queries involving multiple dependent
aggregates at multiple granularities.
– Ex. Grouping by all subsets of {item, region, month}, find
the maximum price in 1997 for each group, and the total sales
among all maximum price tuples.
• select item, region, month, max(price), sum(R.sales)
  from purchases
  where year = 1997
  cube by item, region, month: R
  such that R.price = max(price)
  – R is a grouping variable ranging over each group g_i; sum(R.sales) is a dependent aggregate, summing only over the max-price tuples of the group.
– Continuing the last example, among the max price tuples,


find the min and max shelf life, and
find the fraction of the total sales due to tuples that have min
shelf life within the set of all max price tuples.
49
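The single-group core of the example, the maximum price followed by the sum of sales over only the max-price tuples, can be sketched on made-up rows:

```python
# Illustrative purchase rows for one group: (item, region, month, price, sales).
purchases = [
    ("TV", "west", "Jan", 400, 3),
    ("TV", "west", "Jan", 500, 2),
    ("TV", "west", "Jan", 500, 4),
]

# First aggregate: the maximum price within the group.
max_price = max(p for (_, _, _, p, _) in purchases)
# Dependent aggregate: total sales restricted to the max-price tuples.
sales_at_max = sum(s for (_, _, _, p, s) in purchases if p == max_price)
print(max_price, sales_at_max)  # 500 6
```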
Data Warehouse Usage
• Three kinds of data warehouse applications.
– Information processing:
• supports querying, basic statistical analysis, and reporting using
crosstabs, tables, charts and graphs.
– Analytical processing:
• multidimensional analysis of data warehouse data.
• supports basic OLAP operations, slice-dice, drilling, pivoting.
– Data mining:
• knowledge discovery from hidden patterns.
• supports associations, constructing analytical models, performing
classification and prediction, and presenting the mining results
using visualization tools.
OLAP focuses on interactive data aggregation tools;
data mining emphasizes more automation and deeper analysis.
50
From On-Line Analytical Processing to On Line
Analytical Mining (OLAM)
• Why online analytical mining?
– High quality of data in data warehouses.
• DW contains integrated, consistent, cleaned data.
– Available information processing structure surrounding data
warehouses.
• ODBC, OLEDB, Web accessing, service facilities, reporting and
OLAP tools.
– OLAP-based exploratory data analysis
• mining with drilling, dicing, pivoting, etc.
– On-line selection of data mining functions.
• integration and swapping of multiple mining functions, algorithms,
and tasks.

• Architecture of OLAM (OLAP Mining):


51
An OLAM Architecture (Layering)

• Layer 4 (User Interface): GUI API; mining queries go in, mining results come out.
• Layer 3 (OLAP/OLAM): OLAM engine and OLAP engine side by side, above a Data Cube API.
• Layer 2 (MDDB): the multidimensional database and its metadata, above a Database API.
• Layer 1 (Data Repository): databases and the data warehouse, populated through filtering & integration (data cleaning, data integration).
52
Summary
• Data warehouse:
– A subject-oriented, integrated, time-variant, & nonvolatile collection
of data in support of management’s decision-making process.

• A multi-dimensional model of a data warehouse:


– Star schema, snowflake schema, fact constellations.
– A data cube consists of dimensions & measures.
• OLAP operations: drilling, rolling, slicing, dicing and pivoting.
• OLAP servers: ROLAP, MOLAP, HOLAP.
• Efficient computation of data cubes:
– Partial vs. full vs. no materialization.
– Multiway array aggregation.
– Bitmap index and join index implementations.
• Further development of data cube technology:
– Discovery-drive and multi-feature cubes.
– From OLAP to OLAM (on-line analytical mining).
53
References (I)
• S. Agarwal, R. Agrawal, P. M. Deshpande, A. Gupta, J. F. Naughton, R.
Ramakrishnan, and S. Sarawagi. On the computation of multidimensional
aggregates. In Proc. 1996 VLDB, 506-521, Bombay, India, Sept. 1996.
• D. Agrawal, A. E. Abbadi, A. Singh, and T. Yurek. Efficient view maintenance in
data warehouses. In Proc. 1997 ACM-SIGMOD, 417-427, Arizona, May 1997.
• R. Agrawal, J. Gehrke, D. Gunopulos, and P. Raghavan. Automatic subspace
clustering of high dimensional data for data mining applications. In Proc. 1998
ACM SIGMOD, 94-105, Seattle, Washington, June 1998.
• R. Agrawal, A. Gupta, and S. Sarawagi. Modeling multidimensional databases. In
Proc. 1997 Int. Conf. Data Engineering, 232-243, Birmingham, England, April 1997.
• K. Beyer and R. Ramakrishnan. Bottom-Up Computation of Sparse and Iceberg
CUBEs. In Proc. 1999 ACM-SIGMOD, 359-370, Philadelphia, PA, June 1999.
• S. Chaudhuri and U. Dayal. An overview of data warehousing and OLAP
technology. ACM SIGMOD Record, 26:65-74, 1997.
• OLAP council. MDAPI specification version 2.0. In
http://www.olapcouncil.org/research/apily.htm, 1998.
• J. Gray, S. Chaudhuri, A. Bosworth, A. Layman, D. Reichart, M. Venkatrao, F.
Pellow, and H. Pirahesh. Data cube: A relational aggregation operator
generalizing group-by, cross-tab and sub-totals. Data Mining and Knowledge
Discovery, 1:29-54, 1997.
54
References (II)
• V. Harinarayan, A. Rajaraman, and J. D. Ullman. Implementing data cubes
efficiently. In Proc. 1996 ACM-SIGMOD Int. Conf. Management of Data, pages
205-216, Montreal, Canada, June 1996.
• Microsoft. OLEDB for OLAP programmer's reference version 1.0. In
http://www.microsoft.com/data/oledb/olap, 1998.
• K. Ross and D. Srivastava. Fast computation of sparse datacubes. In Proc. 1997
Int. Conf. Very Large Data Bases, 116-125, Athens, Greece, Aug. 1997.
• K. A. Ross, D. Srivastava, and D. Chatziantoniou. Complex aggregation at
multiple granularities. In Proc. Int. Conf. of Extending Database Technology
(EDBT'98), 263-277, Valencia, Spain, March 1998.
• S. Sarawagi, R. Agrawal, and N. Megiddo. Discovery-driven exploration of OLAP
data cubes. In Proc. Int. Conf. of Extending Database Technology (EDBT'98),
pages 168-182, Valencia, Spain, March 1998.
• E. Thomsen. OLAP Solutions: Building Multidimensional Information Systems.
John Wiley & Sons, 1997.
• Y. Zhao, P. M. Deshpande, and J. F. Naughton. An array-based algorithm for
simultaneous multidimensional aggregates. In Proc. 1997 ACM-SIGMOD Int. Conf.
Management of Data, 159-170, Tucson, Arizona, May 1997.

55
