Data Mining:
Concepts and Techniques
(3rd ed.)
— Chapter 3 —
Jiawei Han, Micheline Kamber, and Jian Pei
University of Illinois at Urbana-Champaign &
Simon Fraser University
©2011 Han, Kamber & Pei. All rights reserved.
Chapter 3: Data Preprocessing
Data Preprocessing: An Overview
Data Quality
Major Tasks in Data Preprocessing
Data Cleaning
Data Integration
Data Reduction
Data Transformation and Data Discretization
Summary
Major Tasks in Data Preprocessing
Data cleaning
Fill in missing values, smooth noisy data, identify or remove
outliers, and resolve inconsistencies
Data integration
Integration of multiple databases, data cubes, or files
Data reduction
Dimensionality reduction
Numerosity reduction
Data compression
Data transformation and data discretization
Normalization
Concept hierarchy generation
Chapter 3: Data Preprocessing
Data Preprocessing: An Overview
Data Quality
Major Tasks in Data Preprocessing
Data Cleaning
Data Integration
Data Reduction
Data Transformation and Data Discretization
Summary
Data Cleaning
Data in the Real World Is Dirty: lots of potentially incorrect data, e.g., due to
faulty instruments, human or computer error, or transmission errors
incomplete: lacking attribute values, lacking certain attributes of
interest, or containing only aggregate data
e.g., Occupation=“ ” (missing data)
noisy: containing noise, errors, or outliers
e.g., Salary=“−10” (an error)
inconsistent: containing discrepancies in codes or names, e.g.,
Age=“42”, Birthday=“03/07/2010”
Was rating “1, 2, 3”, now rating “A, B, C”
discrepancy between duplicate records
Intentional (e.g., disguised missing data)
e.g., Jan. 1 recorded as everyone’s birthday?
Incomplete (Missing) Data
Data is not always available
E.g., many tuples have no recorded value for several
attributes, such as customer income in sales data
Missing data may be due to
equipment malfunction
inconsistent with other recorded data and thus deleted
data not entered due to misunderstanding
certain data may not be considered important at the
time of entry
history or changes of the data were not registered
Missing data may need to be inferred
How to Handle Missing Data?
Ignore the tuple: usually done when class label is missing
(when doing classification)—not effective when the % of
missing values per attribute varies considerably
Fill in the missing value manually: tedious + infeasible?
Fill it in automatically with
a global constant: e.g., “unknown”, a new class?!
the attribute mean
the attribute mean for all samples belonging to the same
class: smarter
the most probable value: inference-based such as Bayesian
formula or decision tree
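A minimal pandas sketch of the automatic fill-in strategies above (the income and class columns are made-up illustrations):

  import pandas as pd

  df = pd.DataFrame({
      "income": [45000.0, None, 61000.0, None, 38000.0],
      "class":  ["A", "A", "B", "B", "A"],
  })

  # Global constant: flag missing values with a sentinel such as "unknown"/-1
  df["income_const"] = df["income"].fillna(-1)

  # Attribute mean over all tuples
  df["income_mean"] = df["income"].fillna(df["income"].mean())

  # Attribute mean per class: smarter, because it conditions on the class label
  df["income_by_class"] = df["income"].fillna(
      df.groupby("class")["income"].transform("mean"))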
LAB: replace missing data
Open Weka -> explorer
Choose Breast_cancer.arff
Click node_caps attribute.
Choose filters -> unsupervised -> attribute
Choose the “ReplaceMissingValues” filter
Noisy Data
Noise: random error or variance in a measured variable
Incorrect attribute values may be due to
faulty data collection instruments
data entry problems
data transmission problems
technology limitation
inconsistency in naming convention
Other data problems which require data cleaning
duplicate records
incomplete data
inconsistent data
How to Handle Noisy Data?
Binning
first sort data and partition into (equal-frequency) bins
then one can smooth by bin means, smooth by bin
median, smooth by bin boundaries, etc.
Regression
smooth by fitting the data into regression functions
Clustering
detect and remove outliers
Combined computer and human inspection
detect suspicious values and check by human (e.g.,
deal with possible outliers)
Chapter 3: Data Preprocessing
Data Preprocessing: An Overview
Data Quality
Major Tasks in Data Preprocessing
Data Cleaning
Data Integration
Data Reduction
Data Transformation and Data Discretization
Summary
Data Integration
Data integration:
Combines data from multiple sources into a coherent store
Schema integration: e.g., A.cust-id ≡ B.cust-#
Integrate metadata from different sources
Entity identification problem:
Identify real world entities from multiple data sources, e.g., Bill Clinton
= William Clinton
Detecting and resolving data value conflicts
For the same real world entity, attribute values from different sources
are different
Possible reasons: different representations, different scales, e.g.,
metric vs. British units
Data Redundancy
An attribute (such as annual revenue) may be redundant if it
can be “derived” from another attribute or set of attributes.
Some redundancies can be detected by correlation analysis.
Given two attributes, such analysis can measure how strongly
one attribute implies the other, based on the available data
For nominal data, we use the χ² (chi-square) test. For
numeric attributes, we can use the correlation coefficient
and covariance, both of which assess how one attribute’s
values vary from those of another
Correlation Analysis (Nominal Data)
χ² (chi-square) test:
  \chi^2 = \sum \frac{(\text{Observed} - \text{Expected})^2}{\text{Expected}}
The larger the χ² value, the more likely the variables are related
The cells that contribute the most to the χ² value are those whose
actual count is very different from the expected count
Correlation does not imply causality
# of hospitals and # of car-theft in a city are correlated
Both are causally linked to the third variable: population
Chi-Square Calculation: An Example
                         | Play chess | Not play chess | Sum (row)
Like science fiction     | 250 (90)   | 200 (360)      | 450
Not like science fiction | 50 (210)   | 1000 (840)     | 1050
Sum (col.)               | 300        | 1200           | 1500
χ² (chi-square) calculation (numbers in parentheses are
expected counts, calculated based on the data distribution
in the two categories):
  \chi^2 = \frac{(250-90)^2}{90} + \frac{(50-210)^2}{210} + \frac{(200-360)^2}{360} + \frac{(1000-840)^2}{840} = 507.93
It shows that like_science_fiction and play_chess are
correlated in the group
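The same example can be checked in a few lines (a sketch using scipy, with the Yates correction turned off to match the hand calculation):

  import numpy as np
  from scipy.stats import chi2_contingency

  # Rows: likes science fiction or not; columns: plays chess or not
  observed = np.array([[250,  200],
                       [ 50, 1000]])

  chi2, p, dof, expected = chi2_contingency(observed, correction=False)
  print(chi2)      # ~507.93
  print(expected)  # [[90, 360], [210, 840]]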
Covariance (Numeric Data)
Covariance is similar to correlation
Covariance:
  Cov(A,B) = E[(A - \bar{A})(B - \bar{B})] = \frac{1}{n}\sum_{i=1}^{n}(a_i - \bar{A})(b_i - \bar{B})
Correlation coefficient:
  r_{A,B} = \frac{Cov(A,B)}{\sigma_A \sigma_B}
where n is the number of tuples, \bar{A} and \bar{B} are the respective means
(expected values) of A and B, and \sigma_A and \sigma_B are the respective
standard deviations of A and B.
Positive covariance: If CovA,B > 0, then A and B both tend to be larger than
their expected values.
Negative covariance: If CovA,B < 0 then if A is larger than its expected
value, B is likely to be smaller than its expected value.
Independence: CovA,B = 0 but the converse is not true:
Some pairs of random variables may have a covariance of 0 but are not
independent. Only under some additional assumptions (e.g., the data follow
multivariate normal distributions) does a covariance of 0 imply independence
Co-Variance: An Example
The computation can be simplified as
  Cov(A,B) = E(A \cdot B) - \bar{A} \cdot \bar{B}
Suppose two stocks A and B have the following values in one week:
(2, 5), (3, 8), (5, 10), (4, 11), (6, 14).
Question: If the stocks are affected by the same industry trends, will
their prices rise or fall together?
E(A) = (2 + 3 + 5 + 4 + 6)/ 5 = 20/5 = 4
E(B) = (5 + 8 + 10 + 11 + 14) /5 = 48/5 = 9.6
Cov(A,B) = (2×5+3×8+5×10+4×11+6×14)/5 − 4 × 9.6 = 4
Thus, A and B rise together since Cov(A, B) > 0.
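The same computation with numpy (a minimal sketch):

  import numpy as np

  A = np.array([2, 3, 5, 4, 6])     # stock A prices over the week
  B = np.array([5, 8, 10, 11, 14])  # stock B prices

  # Simplified form: Cov(A,B) = E(A*B) - E(A)*E(B)
  print((A * B).mean() - A.mean() * B.mean())  # 4.0

  # Cross-check: population covariance from numpy (ddof=0)
  print(np.cov(A, B, ddof=0)[0, 1])  # 4.0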
Visually Evaluating Correlation
[Figure: scatter plots illustrating correlation values ranging from −1 to 1]
LAB: Visual correlation
Open Weka -> explorer
Choose iris dataset
Click visualize tab
See the correlation between petal_length and
petal_width
Click the figure (graph) to open more detailed
graph.
Chapter 3: Data Preprocessing
Data Preprocessing: An Overview
Data Quality
Major Tasks in Data Preprocessing
Data Cleaning
Data Integration
Data Reduction
Data Transformation and Data Discretization
Summary
Data Reduction Strategies
Data reduction: Obtain a reduced representation of the data set that
is much smaller in volume but yet produces the same (or almost the
same) analytical results
Why data reduction? — A database/data warehouse may store
terabytes of data. Complex data analysis may take a very long time to
run on the complete data set.
Data reduction strategies
Dimensionality reduction, e.g., remove unimportant attributes
Wavelet transforms
Principal Components Analysis (PCA)
Feature subset selection, feature creation
Numerosity reduction (some simply call it: Data Reduction)
Regression and Log-Linear Models
Histograms, clustering, sampling
Data cube aggregation
Data compression
Data Reduction 1: Dimensionality Reduction
Curse of dimensionality
When dimensionality increases, data becomes increasingly sparse
Density and distance between points, which are critical to clustering and
outlier analysis, become less meaningful
The possible combinations of subspaces will grow exponentially
Dimensionality reduction
Avoid the curse of dimensionality
Help eliminate irrelevant features and reduce noise
Reduce time and space required in data mining
Allow easier visualization
Dimensionality reduction techniques
Wavelet transforms
Principal Component Analysis
Supervised and nonlinear techniques (e.g., feature selection)
What Is Wavelet Transform?
Decomposes a signal into different frequency subbands
Applicable to n-dimensional signals
Data are transformed to preserve relative distance between
objects at different levels of resolution
Allows natural clusters to become more distinguishable
Used for image compression
Wavelet Transformation
[Figure: Haar-2 and Daubechies-4 wavelet functions]
Discrete wavelet transform (DWT) for linear signal
processing, multi-resolution analysis
Compressed approximation: store only a small fraction of
the strongest wavelet coefficients
Similar to discrete Fourier transform (DFT), but better lossy
compression, localized in space
Method:
Length, L, must be an integer power of 2 (padding with 0’s, when
necessary)
Each transform has 2 functions: smoothing, difference
Applies to pairs of data, resulting in two sets of data of length L/2
Applies the two functions recursively until the desired length is reached
Wavelet Decomposition
Wavelets: A math tool for space-efficient hierarchical
decomposition of functions
S = [2, 2, 0, 2, 3, 5, 4, 4] can be transformed to
S^ = [2.75, −1.25, 0.5, 0, 0, −1, −1, 0]
Compression: many small detail coefficients can be
replaced by 0’s, and only the significant coefficients are
retained
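A short sketch of the Haar decomposition that produces S^ above (pairwise averages for smoothing, pairwise half-differences for detail, applied recursively):

  def haar(s):
      # 1-D Haar wavelet decomposition; len(s) must be a power of 2
      if len(s) == 1:
          return list(s)
      avgs  = [(s[i] + s[i + 1]) / 2 for i in range(0, len(s), 2)]
      diffs = [(s[i] - s[i + 1]) / 2 for i in range(0, len(s), 2)]
      return haar(avgs) + diffs

  print(haar([2, 2, 0, 2, 3, 5, 4, 4]))
  # [2.75, -1.25, 0.5, 0.0, 0.0, -1.0, -1.0, 0.0]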
Principal Component Analysis (PCA)
Find a projection that captures the largest amount of variation in data
The original data are projected onto a much smaller space, resulting in
dimensionality reduction. We find the eigenvectors of the covariance
matrix, and these eigenvectors define the new space
[Figure: data points with principal axes x1, x2]
A good beginner-friendly explanation of PCA can be found here:
https://stats.stackexchange.com/questions/2691/making-sense-of-principal-component-analysis-eigenvectors-eigenvalues
Principal Component Analysis (Steps)
Given N data vectors from n-dimensions, find k ≤ n orthogonal vectors
(principal components) that can be best used to represent data
Normalize input data: Each attribute falls within the same range
Compute k orthonormal (unit) vectors, i.e., principal components
Each input data (vector) is a linear combination of the k principal
component vectors
The principal components are sorted in order of decreasing
“significance” or strength
Since the components are sorted, the size of the data can be
reduced by eliminating the weak components, i.e., those with low
variance (i.e., using the strongest principal components, it is
possible to reconstruct a good approximation of the original data)
Works for numeric data only
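A minimal numpy sketch of these steps (the data matrix is made up):

  import numpy as np

  X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9],
                [1.9, 2.2], [3.1, 3.0], [2.3, 2.7]])

  # 1. Normalize: center each attribute
  Xc = X - X.mean(axis=0)

  # 2. Principal components = eigenvectors of the covariance matrix
  eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))

  # 3. Sort components by decreasing eigenvalue ("significance")
  order = np.argsort(eigvals)[::-1]
  eigvals, eigvecs = eigvals[order], eigvecs[:, order]

  # 4. Keep the strongest k components and project the data
  k = 1
  X_reduced = Xc @ eigvecs[:, :k]
  print(eigvals)    # variance captured by each component
  print(X_reduced)  # data in the reduced k-dimensional space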
LAB: Principal component
Open Weka -> explorer
Choose cpu dataset (numerical)
Click filter and select
(filters.unsupervised.attribute.PrincipalComponents)
Click apply
Remove the less important components. You can see how
much variance (std) needs to be kept.
https://www.youtube.com/watch?v=zuoMjUAPihA
Data Reduction 2: Numerosity Reduction
Reduce data volume by choosing alternative, smaller
forms of data representation
Parametric methods (e.g., regression)
Assume the data fits some model, estimate model
parameters, store only the parameters, and discard
the data (except possible outliers)
Ex.: Log-linear models—obtain the value at a point in m-D
space as the product of values on appropriate marginal
subspaces
Non-parametric methods
Do not assume models
Major families: histograms, clustering, sampling, …
Regression Analysis
Regression analysis: a collective name for techniques for the modeling
and analysis of numerical data consisting of values of a dependent
variable (also called response variable or measurement) and of one or
more independent variables (a.k.a. explanatory variables or predictors)
The parameters are estimated so as to give a “best fit” of the data
Most commonly the best fit is evaluated by using the least-squares
method, but other criteria have also been used
Used for prediction (including forecasting of time-series data),
inference, hypothesis testing, and modeling of causal relationships
[Figure: data points (X1, Y1) with fitted regression line y = x + 1]
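A parametric reduction in miniature (a numpy sketch with made-up points lying near y = x + 1): fit the model and keep only its two parameters instead of the raw data.

  import numpy as np

  x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
  y = np.array([1.1, 1.9, 3.2, 3.9, 5.1])

  # Least-squares fit of y = slope * x + intercept
  slope, intercept = np.polyfit(x, y, deg=1)
  print(slope, intercept)  # close to 1 and 1 for these points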
LAB: Linear Regression
Open Weka -> explorer
Choose cpu dataset (numerical)
Click classify tab, select
classifiers>functions>LinearRegression (not simple
linear regression)
Click Start
Explain the resulting model
class =0.0491 * MYCT + 0.0152 * MMIN +
0.0056 * MMAX + 0.6298 * CACH +
1.4599 * CHMAX + -56.075
https://view.vzaar.com/9187901/video/hd
Histogram
Histograms. The following data are a list of AllElectronics prices for
commonly sold items (rounded to the nearest dollar). The numbers have
been sorted: 1, 1, 5, 5, 5, 5, 5, 8, 8, 10, 10, 10, 10, 12, 14, 14, 14,
15, 15, 15, 15, 15, 15, 18, 18, 18, 18, 18, 18, 18, 18, 20, 20, 20, 20,
20, 20, 20, 21, 21, 21, 21, 25, 25, 25, 25, 25, 28, 28, 30, 30, 30.
Histogram
“How are the buckets determined and the attribute values partitioned?”
There are several partitioning rules; one of the most common is:
Equal-width: in an equal-width histogram, the width of each bucket
range is uniform (e.g., a width of $10 for the buckets in the figure
below)
[Figure: equal-width histogram of the price data]
Sampling
Sampling: obtaining a small sample s to represent the whole
data set N
Allow a mining algorithm to run in complexity that is
potentially sub-linear to the size of the data
Key principle: Choose a representative subset of the data
Simple random sampling may have very poor performance
in the presence of skew
Develop adaptive sampling methods, e.g., stratified
sampling
Note: Sampling may not reduce database I/Os (page at a
time)
Types of Sampling
Simple random sampling
There is an equal probability of selecting any particular
item
Sampling without replacement
Once an object is selected, it is removed from the
population
Sampling with replacement
A selected object is not removed from the population
Stratified sampling:
Partition the data set, and draw samples from each
partition (proportionally, i.e., approximately the same
percentage of the data)
Used in conjunction with skewed data
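A sketch of the three schemes with pandas (the stratum column and sizes are hypothetical):

  import pandas as pd

  df = pd.DataFrame({"value": range(100),
                     "stratum": ["A"] * 80 + ["B"] * 20})

  # Simple random sampling without replacement (SRSWOR)
  srswor = df.sample(n=10, replace=False, random_state=42)

  # Simple random sampling with replacement (SRSWR)
  srswr = df.sample(n=10, replace=True, random_state=42)

  # Stratified: draw the same percentage from each partition
  strat = df.groupby("stratum", group_keys=False).apply(
      lambda g: g.sample(frac=0.1, random_state=42))
  print(strat["stratum"].value_counts())  # 8 from A, 2 from B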
Sampling: With or without Replacement
[Figure: raw data sampled by SRSWOR (simple random sampling without
replacement) and by SRSWR (simple random sampling with replacement)]
Sampling: Cluster or Stratified Sampling
[Figure: raw data vs. a cluster/stratified sample]
LAB: stratified samples
Open weather.numerical dataset
Open the supervised.instance.StratifiedRemoveFolds filter
Configure the filter as shown in the figure
Data Compression
[Figure: lossless compression maps the original data to compressed data
and back exactly; lossy compression reconstructs only an approximation
of the original data]
Chapter 3: Data Preprocessing
Data Preprocessing: An Overview
Data Quality
Major Tasks in Data Preprocessing
Data Cleaning
Data Integration
Data Reduction
Data Transformation and Data Discretization
Summary
Data Transformation Methods
A function that maps the entire set of values of a given attribute to a new
set of replacement values s.t. each old value can be identified with one of
the new values
Methods
1. Attribute/feature construction
1. New attributes constructed from the given ones
2. Aggregation: Summarization, data cube construction
3. Normalization: Scaled to fall within a smaller, specified range
1. min-max normalization
2. z-score normalization
3. normalization by decimal scaling
4. Discretization: Concept hierarchy climbing
5. Smoothing: Remove noise from data
1,2: Feature Construction and Aggregation
Feature Construction
Combining two or more attributes into a single attribute, e.g.,
combining the first_name and last_name attributes into a single
column called full_name. Sometimes you want to process data from
different sources, where one source uses two columns for the name
and another uses one.
Combining two or more attribute values into a single attribute
value, e.g., combining height and weight into BMI (Body Mass Index)
Aggregation
Change of scale: cities aggregated into regions, states, countries, etc.
More “stable” data: aggregated data tends to have less variability
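Both constructions in a pandas sketch (all column names are made up):

  import pandas as pd

  df = pd.DataFrame({"first_name": ["Ada", "Alan"],
                     "last_name":  ["Lovelace", "Turing"],
                     "height_m":   [1.65, 1.78],
                     "weight_kg":  [55.0, 77.0]})

  # Combine two attributes into one
  df["full_name"] = df["first_name"] + " " + df["last_name"]

  # Combine two attribute values into a derived value: BMI = kg / m^2
  df["bmi"] = df["weight_kg"] / df["height_m"] ** 2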
3. Normalization
Min-max normalization: to [new_min_A, new_max_A]
  v' = \frac{v - min_A}{max_A - min_A}(new\_max_A - new\_min_A) + new\_min_A
Ex. Let income range $12,000 to $98,000 be normalized to [0.0, 1.0].
Then $73,600 is mapped to
  \frac{73,600 - 12,000}{98,000 - 12,000}(1.0 - 0) + 0 = 0.716
Z-score normalization (μ: mean, σ: standard deviation):
  v' = \frac{v - \mu_A}{\sigma_A}
Ex. Let μ = 54,000, σ = 16,000. Then
  \frac{73,600 - 54,000}{16,000} = 1.225
Normalization by decimal scaling:
  v' = \frac{v}{10^j}, where j is the smallest integer such that \max(|v'|) < 1
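The three normalizations applied to the income example (a numpy sketch):

  import numpy as np

  v = np.array([12000.0, 54000.0, 73600.0, 98000.0])

  # Min-max normalization to [0, 1]
  new_min, new_max = 0.0, 1.0
  print((v - v.min()) / (v.max() - v.min()) * (new_max - new_min) + new_min)
  # 73,600 maps to ~0.716

  # Z-score normalization with the slide's mean and standard deviation
  mu, sigma = 54000.0, 16000.0
  print((v - mu) / sigma)  # 73,600 maps to 1.225

  # Decimal scaling: divide by 10^j so all |v'| < 1
  j = int(np.floor(np.log10(np.abs(v).max()))) + 1
  print(v / 10 ** j)  # j = 5, so 98,000 maps to 0.98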
LAB: Normalize
Open Weka -> explorer
Choose weather.numerical dataset
Click filter and select
(filters.unsupervised.attributes.Normalize)
Click on the filter to change the normalization
scale if needed
Click apply
Check the new data
4. Label Discretization
Three types of attributes
Nominal—values from an unordered set, e.g., color, profession
Ordinal—values from an ordered set, e.g., military or academic
rank
Numeric—integer or real numbers
Discretization: Divide the range of a continuous attribute into intervals
Interval labels can then be used to replace actual data values
Reduce data size by discretization
Split (top-down) vs. merge (bottom-up)
Prepare for further analysis, e.g., classification
Example: Age Discretization; Young 18-29; Career 30-40; Mid-
Life 41-55; Empty-Nester 56-69; Senior 70+
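The age example as a pandas sketch (pd.cut replaces each value with its interval label; the cut points follow the slide):

  import pandas as pd

  ages = pd.Series([18, 25, 34, 47, 60, 75])
  labels = ["Young", "Career", "Mid-Life", "Empty-Nester", "Senior"]
  print(pd.cut(ages, bins=[17, 29, 40, 55, 69, 120], labels=labels))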
LAB: Label Discretize
Open Weka -> explorer
Choose weather.numerical dataset
Click filter and select
(filters.unsupervised.attribute.Discretize)
attributeIndices = 2 (2 is the temperature attribute),
number of bins = 3, precision = 2
Click on the filter to change the discretization
scale if needed
Click apply
Check the new data
Data Discretization Methods
Typical methods: All the methods can be applied
recursively
Binning
Top-down split, unsupervised
Histogram analysis
Top-down split, unsupervised
Clustering analysis (unsupervised, top-down split or
bottom-up merge)
Correlation (e.g., χ²) analysis (unsupervised, bottom-up
merge)
Data Discretization: Binning
Equal-width (distance) partitioning
Divides the range into N intervals of equal size: uniform grid
if A and B are the lowest and highest values of the attribute, the
width of intervals will be: W = (B –A)/N.
The most straightforward, but outliers may dominate presentation
Skewed data is not handled well
Equal-depth (frequency) partitioning
Divides the range into N intervals, each containing approximately
same number of samples
Good data scaling
Managing categorical attributes can be tricky
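A sketch contrasting the two partitionings with pandas (cut gives equal-width bins, qcut gives equal-depth bins; the prices reuse the next slide's data):

  import pandas as pd

  prices = pd.Series([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34])

  # Equal-width: N intervals of width W = (max - min) / N
  print(pd.cut(prices, bins=3).value_counts().sort_index())

  # Equal-depth: each interval holds ~the same number of samples
  print(pd.qcut(prices, q=3).value_counts().sort_index())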
5. Binning Methods for Data Smoothing
Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28,
29, 34
* Partition into equal-frequency (equi-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
* Smoothing by bin means:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
* Smoothing by bin boundaries:
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34
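The same smoothing in plain Python (a sketch with bin depth 4):

  data = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]  # already sorted
  depth = 4
  bins = [data[i:i + depth] for i in range(0, len(data), depth)]

  # Smoothing by bin means: each value becomes its bin's (rounded) mean
  print([[round(sum(b) / len(b))] * len(b) for b in bins])
  # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]

  # Smoothing by bin boundaries: each value snaps to the nearer boundary
  print([[min(b) if v - min(b) <= max(b) - v else max(b) for v in b]
         for b in bins])
  # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]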
Chapter 3: Data Preprocessing
Data Preprocessing: An Overview
Data Quality
Major Tasks in Data Preprocessing
Data Cleaning
Data Integration
Data Reduction
Data Transformation and Data Discretization
Summary
Exercises
2.4 In real-world data, tuples with missing values for some attributes
are a common occurrence. Describe various methods for handling this
problem.
2.5 Given the following data (in increasing order) for the attribute age:
13, 15, 16, 16, 19, 20, 20, 21, 22, 22, 25, 25, 25, 25, 30, 33, 33, 35,
35, 35, 35, 36, 40, 45, 46, 52, 70.
(a) Use smoothing by bin means to smooth these data, using a bin
depth of 3. Illustrate your steps. Comment on the effect of this
technique for the given data.
(b) What other methods are there for data smoothing?
2.6 Use these methods to normalize the following group of data: 200,
300, 400, 600, 1000
(a) min-max normalization by setting min = 0 and max = 1
2.7 Using the data for age given above: 13, 15, 16, 16, 19, 20, 20, 21,
22, 22, 25, 25, 25, 25, 30, 33, 33, 35, 35, 35, 35, 36, 40, 45, 46, 52, 70.
(a) Discretize data into 3 ranges of the strata “youth,” “middle-aged,”
and “senior.”
Summary
Data quality: accuracy, completeness, consistency, timeliness,
believability, interpretability
Data cleaning: e.g., missing/noisy values, outliers
Data integration from multiple sources:
Entity identification problem
Remove redundancies
Detect inconsistencies
Data reduction
Dimensionality reduction
Numerosity reduction
Data compression
Data transformation and data discretization
Normalization
Concept hierarchy generation