Data Mining:
Concepts and Techniques
— Chapter 2 —
Jiawei Han, Micheline Kamber, and Jian Pei
University of Illinois at Urbana-Champaign
Simon Fraser University
©2011 Han, Kamber, and Pei. All rights reserved.
Chapter 2: Getting to Know Your Data
◼ Data Objects and Attribute Types
◼ Basic Statistical Descriptions of Data
◼ Data Visualization
◼ Measuring Data Similarity and Dissimilarity
◼ Summary
Data Visualization
◼ Why data visualization?
◼ Gain insight into an information space by mapping data onto graphical primitives
◼ Provide a qualitative overview of large data sets
◼ Search for patterns, trends, structure, irregularities, and relationships among data
◼ Help find interesting regions and suitable parameters for further quantitative analysis
◼ Provide a visual proof of computer representations derived from the data
◼ Categorization of visualization methods:
◼ Pixel-oriented visualization techniques
◼ Geometric projection visualization techniques
◼ Icon-based visualization techniques
◼ Hierarchical visualization techniques
◼ Visualizing complex data and relations
Similarity and Dissimilarity
◼ Similarity
◼ Numerical measure of how alike two data objects are
◼ Value is higher when objects are more alike
◼ Often falls in the range [0,1]
◼ Dissimilarity (e.g., distance)
◼ Numerical measure of how different two data objects are
◼ Lower when objects are more alike
◼ Minimum dissimilarity is often 0
◼ Upper limit varies
◼ Proximity refers to a similarity or dissimilarity
Data Matrix and Dissimilarity Matrix
◼ Data matrix
◼ n data points with p dimensions
◼ Two modes (rows are objects, columns are attributes):

    | x11  ...  x1f  ...  x1p |
    | ...  ...  ...  ...  ... |
    | xi1  ...  xif  ...  xip |
    | ...  ...  ...  ...  ... |
    | xn1  ...  xnf  ...  xnp |

◼ Dissimilarity matrix
◼ n data points, but registers only the distance between each pair of objects
◼ A triangular matrix
◼ Single mode:

    |   0                            |
    | d(2,1)    0                    |
    | d(3,1)  d(3,2)    0            |
    |   :       :       :            |
    | d(n,1)  d(n,2)   ...  ...  0   |
Proximity Measure for Nominal Attributes
◼ Can take 2 or more states, e.g., red, yellow, blue, green (generalization of a binary attribute)
◼ Method 1: Simple matching
◼ m: # of matches, p: total # of variables

    d(i, j) = (p − m) / p

◼ Method 2: Use a large number of binary attributes
◼ creating a new binary attribute for each of the M nominal states
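The simple-matching rule can be sketched in a few lines of Python (an illustrative sketch; the attribute values below are made up):

```python
def nominal_dissimilarity(obj_i, obj_j):
    """Simple matching: d(i, j) = (p - m) / p, where m is the number of
    attributes on which the two objects match and p is the total number
    of attributes."""
    p = len(obj_i)
    m = sum(1 for a, b in zip(obj_i, obj_j) if a == b)
    return (p - m) / p

# Two objects described by three nominal attributes (hypothetical values):
# they differ on one attribute out of three, so d = (3 - 2) / 3 = 0.333...
print(nominal_dissimilarity(["red", "round", "small"],
                            ["red", "square", "small"]))
```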
Proximity Measure for Binary Attributes
◼ A contingency table for binary data, where q, r, s, t count the attributes on which objects i and j take the values (1,1), (1,0), (0,1), and (0,0), respectively:

                 Object j
                   1    0
    Object i  1    q    r
              0    s    t

◼ Distance measure for symmetric binary variables:

    d(i, j) = (r + s) / (q + r + s + t)

◼ Distance measure for asymmetric binary variables (negative 0–0 matches are ignored):

    d(i, j) = (r + s) / (q + r + s)

◼ Jaccard coefficient (similarity measure for asymmetric binary variables):

    sim_Jaccard(i, j) = q / (q + r + s)

◼ Note: the Jaccard coefficient is the same as "coherence"
Dissimilarity between Binary Variables
◼ Example
Name Gender Fever Cough Test-1 Test-2 Test-3 Test-4
Jack M Y N P N N N
Mary F Y N P N P N
Jim M Y P N N N N
◼ Gender is a symmetric attribute
◼ The remaining attributes are asymmetric binary
◼ Let the values Y and P be 1, and the value N be 0

    d(Jack, Mary) = (0 + 1) / (2 + 0 + 1) = 0.33
    d(Jack, Jim)  = (1 + 1) / (1 + 1 + 1) = 0.67
    d(Jim, Mary)  = (1 + 2) / (1 + 1 + 2) = 0.75
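The three values above can be reproduced with a short script, a sketch using the asymmetric formula d = (r + s) / (q + r + s) with Gender excluded as a symmetric attribute:

```python
def asymmetric_binary_d(x, y):
    """d(i, j) = (r + s) / (q + r + s); 0-0 matches (t) are ignored."""
    q = sum(1 for a, b in zip(x, y) if a == 1 and b == 1)
    r = sum(1 for a, b in zip(x, y) if a == 1 and b == 0)
    s = sum(1 for a, b in zip(x, y) if a == 0 and b == 1)
    return (r + s) / (q + r + s)

# Fever, Cough, Test-1..Test-4, with Y/P -> 1 and N -> 0
jack = [1, 0, 1, 0, 0, 0]
mary = [1, 0, 1, 0, 1, 0]
jim  = [1, 1, 0, 0, 0, 0]

print(round(asymmetric_binary_d(jack, mary), 2))  # 0.33
print(round(asymmetric_binary_d(jack, jim), 2))   # 0.67
print(round(asymmetric_binary_d(jim, mary), 2))   # 0.75
```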
Standardizing Numeric Data
◼ Z-score:

    z = (x − μ) / σ

◼ x: raw score to be standardized, μ: mean of the population, σ: standard deviation
◼ the distance between the raw score and the population mean in units of the standard deviation
◼ negative when the raw score is below the mean, “+” when above
◼ An alternative way: calculate the mean absolute deviation

    s_f = (1/n)(|x_1f − m_f| + |x_2f − m_f| + ... + |x_nf − m_f|)

    where m_f = (1/n)(x_1f + x_2f + ... + x_nf)

◼ standardized measure (z-score):

    z_if = (x_if − m_f) / s_f
◼ Using mean absolute deviation is more robust than using standard
deviation
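A minimal sketch of standardization with the mean absolute deviation (the sample values are hypothetical):

```python
def standardize(values):
    """Z-scores computed with the mean absolute deviation s_f in place of
    the standard deviation, which damps the influence of outliers."""
    n = len(values)
    m = sum(values) / n                        # m_f: the attribute mean
    s = sum(abs(x - m) for x in values) / n    # s_f: mean absolute deviation
    return [(x - m) / s for x in values]

# Hypothetical attribute values: mean = 5.0, mean absolute deviation = 2.0
print(standardize([2.0, 4.0, 6.0, 8.0]))  # [-1.5, -0.5, 0.5, 1.5]
```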
Example:
Data Matrix and Dissimilarity Matrix
Data Matrix
point attribute1 attribute2
x1 1 2
x2 3 5
x3 2 0
x4 4 5
Dissimilarity Matrix
(with Euclidean Distance)
    x1    x2    x3    x4
x1  0
x2  3.61  0
x3  2.24  5.1   0
x4  4.24  1     5.39  0
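This dissimilarity matrix can be reproduced with Python's standard library (note that d(x1, x3) = √5 ≈ 2.24):

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

points = {"x1": (1, 2), "x2": (3, 5), "x3": (2, 0), "x4": (4, 5)}

# Print the lower triangle of the Euclidean dissimilarity matrix
names = list(points)
for i, a in enumerate(names):
    row = [round(dist(points[a], points[b]), 2) for b in names[:i + 1]]
    print(a, row)
# x1 [0.0]
# x2 [3.61, 0.0]
# x3 [2.24, 5.1, 0.0]
# x4 [4.24, 1.0, 5.39, 0.0]
```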
Distance on Numeric Data: Minkowski Distance
◼ Minkowski distance: A popular distance measure

    d(i, j) = (|x_i1 − x_j1|^h + |x_i2 − x_j2|^h + ... + |x_ip − x_jp|^h)^(1/h)

    where i = (x_i1, x_i2, …, x_ip) and j = (x_j1, x_j2, …, x_jp) are two p-dimensional data objects, and h is the order (the distance so defined is also called the L-h norm)
◼ Properties
◼ d(i, j) > 0 if i ≠ j, and d(i, i) = 0 (Positive definiteness)
◼ d(i, j) = d(j, i) (Symmetry)
◼ d(i, j) ≤ d(i, k) + d(k, j) (Triangle Inequality)
◼ A distance that satisfies these properties is a metric
Special Cases of Minkowski Distance
◼ h = 1: Manhattan (city block, L1 norm) distance
◼ E.g., the Hamming distance: the number of bits that are different between two binary vectors

    d(i, j) = |x_i1 − x_j1| + |x_i2 − x_j2| + ... + |x_ip − x_jp|

◼ h = 2: Euclidean (L2 norm) distance

    d(i, j) = (|x_i1 − x_j1|² + |x_i2 − x_j2|² + ... + |x_ip − x_jp|²)^(1/2)

◼ h → ∞: “supremum” (Lmax norm, L∞ norm) distance
◼ This is the maximum difference between any component (attribute) of the vectors

    d(i, j) = max_f |x_if − x_jf|
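The three special cases can be captured by one function, sketched here using the points x1 = (1, 2) and x2 = (3, 5) from the earlier example:

```python
def minkowski(x, y, h):
    """L_h norm distance: h=1 Manhattan, h=2 Euclidean,
    h=float('inf') supremum."""
    diffs = [abs(a - b) for a, b in zip(x, y)]
    if h == float("inf"):
        return max(diffs)  # supremum: largest per-attribute difference
    return sum(d ** h for d in diffs) ** (1 / h)

x1, x2 = (1, 2), (3, 5)
print(minkowski(x1, x2, 1))                  # 5.0  (Manhattan)
print(round(minkowski(x1, x2, 2), 2))        # 3.61 (Euclidean)
print(minkowski(x1, x2, float("inf")))       # 3    (supremum)
```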
Example: Minkowski Distance
Dissimilarity Matrices

    point  attribute 1  attribute 2
    x1     1            2
    x2     3            5
    x3     2            0
    x4     4            5

Manhattan (L1)

    L1   x1   x2   x3   x4
    x1   0
    x2   5    0
    x3   3    6    0
    x4   6    1    7    0

Euclidean (L2)

    L2   x1    x2    x3    x4
    x1   0
    x2   3.61  0
    x3   2.24  5.1   0
    x4   4.24  1     5.39  0

Supremum (L∞)

    L∞   x1   x2   x3   x4
    x1   0
    x2   3    0
    x3   2    5    0
    x4   3    1    5    0
Ordinal Variables
◼ An ordinal variable can be discrete or continuous
◼ Order is important, e.g., rank
◼ Can be treated like interval-scaled
◼ replace x_if by its rank, r_if ∈ {1, ..., M_f}
◼ map the range of each variable onto [0, 1] by replacing the i-th object in the f-th variable by

    z_if = (r_if − 1) / (M_f − 1)
◼ compute the dissimilarity using methods for interval-
scaled variables
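The rank-normalization step can be sketched as follows (the ordered quality states are hypothetical):

```python
def ordinal_to_interval(values, ordered_states):
    """Map ordinal values to [0, 1]: rank r_if in {1..M_f} -> (r_if - 1) / (M_f - 1)."""
    rank = {s: i + 1 for i, s in enumerate(ordered_states)}
    m = len(ordered_states)
    return [(rank[v] - 1) / (m - 1) for v in values]

# Hypothetical ordinal attribute with M_f = 3 ordered states
z = ordinal_to_interval(["fair", "good", "excellent"],
                        ["fair", "good", "excellent"])
print(z)  # [0.0, 0.5, 1.0]
```

The resulting z values can then be fed to any interval-scaled distance, e.g. Euclidean.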
Attributes of Mixed Type
◼ A database may contain all attribute types
◼ Nominal, symmetric binary, asymmetric binary, numeric,
ordinal
◼ One may use a weighted formula to combine their effects:

    d(i, j) = Σ_{f=1..p} δ_ij^(f) d_ij^(f)  /  Σ_{f=1..p} δ_ij^(f)

    where the indicator δ_ij^(f) = 0 if x_if or x_jf is missing, or if f is asymmetric binary and x_if = x_jf = 0; otherwise δ_ij^(f) = 1
◼ f is binary or nominal:
    d_ij^(f) = 0 if x_if = x_jf, or d_ij^(f) = 1 otherwise
◼ f is numeric: use the normalized distance
◼ f is ordinal
◼ Compute the rank r_if and z_if = (r_if − 1) / (M_f − 1)
◼ Treat z_if as interval-scaled
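A minimal sketch of the weighted combination (the attribute types, values, and range dictionary below are all hypothetical; missing values and asymmetric-binary 0–0 matches get δ = 0):

```python
def mixed_dissimilarity(x, y, types, ranges=None):
    """d(i, j) = sum(delta_f * d_f) / sum(delta_f) over attributes f.
    types[f] is one of "nominal", "asym_binary", "numeric"; numeric
    attributes are normalized by the attribute's value range ranges[f].
    Assumes at least one comparable attribute (denominator > 0)."""
    num, den = 0.0, 0.0
    for f, (a, b) in enumerate(zip(x, y)):
        if a is None or b is None:
            continue                         # missing value: delta_f = 0
        if types[f] == "asym_binary" and a == 0 and b == 0:
            continue                         # negative match: delta_f = 0
        if types[f] == "numeric":
            d = abs(a - b) / ranges[f]       # normalized distance
        else:                                # nominal or binary
            d = 0.0 if a == b else 1.0
        num += d
        den += 1.0
    return num / den

x = ["red", 1, 30.0]
y = ["blue", 0, 50.0]
print(mixed_dissimilarity(x, y, ["nominal", "asym_binary", "numeric"],
                          ranges={2: 100.0}))  # (1 + 1 + 0.2) / 3 = 0.733...
```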
Cosine Similarity
◼ A document can be represented by thousands of attributes, each
recording the frequency of a particular word (such as keywords) or
phrase in the document.
◼ Other vector objects: gene features in micro-arrays, …
◼ Applications: information retrieval, biologic taxonomy, gene feature
mapping, ...
◼ Cosine measure: If d1 and d2 are two vectors (e.g., term-frequency
vectors), then
    cos(d1, d2) = (d1 • d2) / (||d1|| ||d2||)

where • indicates the vector dot product and ||d|| is the length of vector d
Example: Cosine Similarity
◼ cos(d1, d2) = (d1 • d2) / (||d1|| ||d2||), where • indicates the vector dot product and ||d|| is the length of vector d
◼ Ex: Find the similarity between documents 1 and 2.

    d1 = (5, 0, 3, 0, 2, 0, 0, 2, 0, 0)
    d2 = (3, 0, 2, 0, 1, 1, 0, 1, 0, 1)

    d1 • d2 = 5×3 + 0×0 + 3×2 + 0×0 + 2×1 + 0×1 + 0×0 + 2×1 + 0×0 + 0×1 = 25
    ||d1|| = (5² + 0² + 3² + 0² + 2² + 0² + 0² + 2² + 0² + 0²)^0.5 = 42^0.5 ≈ 6.48
    ||d2|| = (3² + 0² + 2² + 0² + 1² + 1² + 0² + 1² + 0² + 1²)^0.5 = 17^0.5 ≈ 4.12
    cos(d1, d2) = 25 / (6.48 × 4.12) ≈ 0.94
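The computation above can be checked in plain Python:

```python
from math import sqrt

def cosine_similarity(a, b):
    """cos(d1, d2) = (d1 . d2) / (||d1|| ||d2||)."""
    dot = sum(x * y for x, y in zip(a, b))                 # d1 . d2
    norm_a = sqrt(sum(x * x for x in a))                   # ||d1||
    norm_b = sqrt(sum(y * y for y in b))                   # ||d2||
    return dot / (norm_a * norm_b)

d1 = (5, 0, 3, 0, 2, 0, 0, 2, 0, 0)
d2 = (3, 0, 2, 0, 1, 1, 0, 1, 0, 1)
print(round(cosine_similarity(d1, d2), 2))  # 0.94
```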
Summary
◼ Data attribute types: nominal, binary, ordinal, interval-scaled, ratio-
scaled
◼ Many types of data sets, e.g., numerical, text, graph, Web, image.
◼ Gain insight into the data by:
◼ Basic statistical data description: central tendency, dispersion,
graphical displays
◼ Data visualization: map data onto graphical primitives
◼ Measure data similarity
◼ The above steps are the beginning of data preprocessing
◼ Many methods have been developed, but this remains an active area of research