Data Mining:
Concepts and Techniques
S. B. Jain Institute of Technology Management
and Research, Nagpur
Department of Computer Science & Engineering
Year / Semester : III Year / VI Semester
Session : 2022-23 (EVEN)
Course Name & Code : [ Data Mining & Warehousing ][ PECCS602T ]
Course In-Charge : Mr. Aniket V. Bhoyar (Assistant Professor)
Mining Frequent Patterns, Association and Correlations

Basic concepts and a road map
Efficient and scalable frequent itemset mining methods
Mining various kinds of association rules
From association mining to correlation analysis
Constraint-based association mining
Summary
What Is Frequent Pattern Analysis?
Frequent Pattern: A pattern (a set of items, subsequences, substructures,
etc.) that occurs frequently in a data set
First proposed by Agrawal, Imielinski, and Swami [AIS93] in the context
of frequent itemsets and association rule mining
Motivation: Finding inherent regularities in data
What products were often purchased together? Beer and diapers?
What are the subsequent purchases after buying a PC?
What kinds of DNA are sensitive to this new drug?
Can we automatically classify web documents?
Applications
Basket data analysis, cross-marketing, catalog design, sale campaign
analysis, Web log (click stream) analysis, and DNA sequence analysis.
Why Is Freq. Pattern Mining Important?
Discloses an intrinsic and important property of data sets
Forms the foundation for many essential data mining tasks
Association, correlation, and causality analysis
Sequential, structural (e.g., sub-graph) patterns
Pattern analysis in spatiotemporal, multimedia, time-series, and stream data
Classification: associative classification
Cluster Analysis: frequent pattern-based clustering
Data Warehousing: iceberg cube and cube-gradient
Semantic Data Compression: fascicles
Broad applications
Basic Concepts: Frequent Patterns and Association Rules

Itemset X = {x1, …, xk}

Transaction-id | Items bought
10 | A, B, D
20 | A, C, D
30 | A, D, E
40 | B, E, F
50 | B, C, D, E, F

Find all the rules X ⇒ Y with minimum support and confidence:
support, s: probability that a transaction contains X ∪ Y
confidence, c: conditional probability that a transaction having X also contains Y

[Venn diagram: customers who buy beer, customers who buy diapers, and the overlap of customers who buy both]

Let sup_min = 50%, conf_min = 50%
Frequent patterns: {A:3, B:3, D:4, E:3, AD:3}
Association rules:
A ⇒ D (support 60%, confidence 100%)
D ⇒ A (support 60%, confidence 75%)
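These two measures are straightforward to compute directly. A minimal Python sketch, using the five transactions from the table above, that reproduces both rules' numbers:

    transactions = [{'A', 'B', 'D'}, {'A', 'C', 'D'}, {'A', 'D', 'E'},
                    {'B', 'E', 'F'}, {'B', 'C', 'D', 'E', 'F'}]
    n = len(transactions)

    def support(itemset):
        # fraction of transactions containing every item in `itemset`
        return sum(1 for t in transactions if itemset <= t) / n

    def confidence(X, Y):
        # conditional probability that a transaction having X also contains Y
        return support(X | Y) / support(X)

    print(support({'A', 'D'}))        # 0.6  -> A => D has 60% support
    print(confidence({'A'}, {'D'}))   # 1.0  -> ... and 100% confidence
    print(confidence({'D'}, {'A'}))   # 0.75 -> D => A has 75% confidence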
Closed Patterns and Max-Patterns

A long pattern contains a combinatorial number of sub-patterns; e.g., {a1, …, a100} contains C(100,1) + C(100,2) + … + C(100,100) = 2^100 − 1 ≈ 1.27 × 10^30 sub-patterns!
Solution: mine closed patterns and max-patterns instead.
An itemset X is closed if X is frequent and there exists no super-pattern Y ⊃ X with the same support as X (proposed by Pasquier, et al. @ ICDT'99).
An itemset X is a max-pattern if X is frequent and there exists no frequent super-pattern Y ⊃ X (proposed by Bayardo @ SIGMOD'98).
A closed pattern is a lossless compression of frequent patterns, reducing the number of patterns and rules.
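Both definitions reduce to subset checks over the set of frequent itemsets. A minimal sketch (function names are illustrative), applied to the frequent itemsets and supports from the five-transaction example earlier:

    # frequent itemsets and supports from the earlier example (sup_min = 50%)
    frequent = {frozenset('A'): 3, frozenset('B'): 3, frozenset('D'): 4,
                frozenset('E'): 3, frozenset('AD'): 3}

    def is_closed(X):
        # closed: no frequent proper superset has the same support as X
        return all(not (X < Y and frequent[Y] == frequent[X]) for Y in frequent)

    def is_max(X):
        # max-pattern: no frequent proper superset exists at all
        return all(not X < Y for Y in frequent)

    for X in frequent:
        print(set(X), 'closed' if is_closed(X) else '', 'max' if is_max(X) else '')
    # D is closed but not max ({A, D} is a frequent superset with lower support);
    # A is neither (same support as {A, D}); B, E, and AD are both closed and max.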
Scalable Methods for Mining Frequent Patterns
The downward closure property of frequent patterns:
Any subset of a frequent itemset must be frequent.
If {beer, diaper, nuts} is frequent, so is {beer, diaper}; i.e., every transaction having {beer, diaper, nuts} also contains {beer, diaper} (see the small check after the list below).
Scalable Mining Methods: Three major approaches
Apriori
Frequent Pattern Growth (FPGrowth)
Vertical Data Format Approach
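The downward closure property follows directly from counting: every transaction containing a superset also contains each of its subsets, so a subset's support count can never be smaller. A tiny illustrative check (these four transactions are made up for the demonstration):

    transactions = [{'beer', 'diaper', 'nuts'}, {'beer', 'diaper'},
                    {'beer', 'nuts'}, {'diaper'}]

    def count(itemset):
        return sum(1 for t in transactions if itemset <= t)

    # every transaction with {beer, diaper, nuts} also has {beer, diaper}
    assert count({'beer', 'diaper'}) >= count({'beer', 'diaper', 'nuts'})
    print(count({'beer', 'diaper', 'nuts'}), count({'beer', 'diaper'}))  # 1 2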
The Apriori Algorithm—An Example (sup_min = 2)

Database TDB:
Tid | Items
10 | A, C, D
20 | B, C, E
30 | A, B, C, E
40 | B, E

1st scan → C1: {A}:2, {B}:3, {C}:3, {D}:1, {E}:3
Prune below sup_min → L1: {A}:2, {B}:3, {C}:3, {E}:3

Join L1 with itself → C2: {A,B}, {A,C}, {A,E}, {B,C}, {B,E}, {C,E}
2nd scan → C2 with counts: {A,B}:1, {A,C}:2, {A,E}:1, {B,C}:2, {B,E}:3, {C,E}:2
Prune → L2: {A,C}:2, {B,C}:2, {B,E}:3, {C,E}:2

Join L2 with itself → C3: {B,C,E}
3rd scan → {B,C,E}:2 → L3: {B,C,E}:2
The Apriori Algorithm
Pseudo-code:
Ck: Candidate itemset of size k
Lk: frequent itemset of size k

L1 = {frequent items};
for (k = 1; Lk ≠ ∅; k++) do begin
    Ck+1 = candidates generated from Lk;
    for each transaction t in database do
        increment the count of all candidates in Ck+1 that are contained in t;
    Lk+1 = candidates in Ck+1 with min_support;
end
return ∪k Lk;
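A runnable Python sketch of this pseudo-code. For clarity, support counting here does a naive scan per candidate rather than counting all candidates in one pass per level, as a real implementation would. Running it on the TDB example above reproduces L1, L2, and L3:

    from itertools import combinations

    def apriori(transactions, min_sup):
        transactions = [frozenset(t) for t in transactions]

        def support(itemset):
            return sum(1 for t in transactions if itemset <= t)

        # L1: frequent 1-itemsets
        items = {i for t in transactions for i in t}
        L = {frozenset([i]) for i in items if support(frozenset([i])) >= min_sup}
        frequent = {s: support(s) for s in L}
        k = 1
        while L:
            # self-join: unions of frequent k-itemsets that differ in one item
            candidates = {a | b for a in L for b in L if len(a | b) == k + 1}
            # prune: keep candidates whose k-subsets are all frequent
            candidates = {c for c in candidates
                          if all(frozenset(s) in L for s in combinations(c, k))}
            L = {c for c in candidates if support(c) >= min_sup}
            frequent.update({s: support(s) for s in L})
            k += 1
        return frequent

    tdb = [{'A', 'C', 'D'}, {'B', 'C', 'E'}, {'A', 'B', 'C', 'E'}, {'B', 'E'}]
    result = apriori(tdb, 2)
    for itemset in sorted(result, key=lambda s: (len(s), sorted(s))):
        print(set(itemset), result[itemset])
    # prints L1 = {A}:2, {B}:3, {C}:3, {E}:3, then L2, then {B, C, E}: 2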
Important Details of Apriori
How to generate candidates?
Step 1: self-joining Lk
Step 2: pruning
How to count supports of candidates?
Example of Candidate-generation
L3={abc, abd, acd, ace, bcd}
Self-joining: L3*L3
abcd from abc and abd
acde from acd and ace
Pruning:
acde is removed because ade is not in L3
C4={abcd}
How to Generate Candidates?
Suppose the items in Lk-1 are listed in an order.

Step 1: self-joining Lk-1
insert into Ck
select p.item1, p.item2, …, p.itemk-1, q.itemk-1
from Lk-1 p, Lk-1 q
where p.item1 = q.item1, …, p.itemk-2 = q.itemk-2, p.itemk-1 < q.itemk-1

Step 2: pruning
forall itemsets c in Ck do
    forall (k-1)-subsets s of c do
        if (s is not in Lk-1) then delete c from Ck
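The same two steps in runnable Python (itemsets represented as sorted tuples; the function name is illustrative). On the L3 from the previous slide it keeps abcd and prunes acde:

    from itertools import combinations

    def gen_candidates(Lk):
        # Apriori candidate generation: self-join Lk on the first k-1 items,
        # then prune candidates that have an infrequent k-subset.
        Lk = sorted(Lk)
        k = len(Lk[0])
        Lset = set(Lk)
        Ck1 = []
        for i, p in enumerate(Lk):
            for q in Lk[i + 1:]:
                if p[:-1] == q[:-1] and p[-1] < q[-1]:   # self-join condition
                    c = p + (q[-1],)
                    # prune: every k-subset of c must itself be in Lk
                    if all(s in Lset for s in combinations(c, k)):
                        Ck1.append(c)
        return Ck1

    L3 = [tuple('abc'), tuple('abd'), tuple('acd'), tuple('ace'), tuple('bcd')]
    print(gen_candidates(L3))  # [('a', 'b', 'c', 'd')] -- acde pruned, ade not in L3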
Challenges of Frequent Pattern Mining
Challenges
Multiple scans of transaction database
Huge number of candidates
Tedious workload of support counting for candidates
Improving Apriori: general ideas
Reduce passes of transaction database scans
Shrink number of candidates
Facilitate support counting of candidates
Construct FP-Tree from a Transaction Database
min_support = 3

TID | Items bought | (Ordered) frequent items
100 | {f, a, c, d, g, i, m, p} | {f, c, a, m, p}
200 | {a, b, c, f, l, m, o} | {f, c, a, b, m}
300 | {b, f, h, j, o, w} | {f, b}
400 | {b, c, k, s, p} | {c, b, p}
500 | {a, f, c, e, l, p, m, n} | {f, c, a, m, p}

1. Scan DB once, find frequent 1-itemsets (single-item patterns): f:4, c:4, a:3, b:3, m:3, p:3
2. Sort frequent items in frequency-descending order: F-list = f-c-a-b-m-p
3. Scan DB again, construct the FP-tree

[FP-tree figure: root {}; branch f:4 → c:3 → a:3, which splits into m:2 → p:2 and b:1 → m:1; branch f:4 → b:1; branch c:1 → b:1 → p:1. A header table links each F-list item to its nodes.]
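A compact Python sketch of this construction (the class and function names are illustrative, not from the slides). Note that ties in item frequency are broken arbitrarily here, so the computed order may differ from the slide's f-c-a-b-m-p at tie positions; that changes the node layout but not the patterns mined:

    from collections import Counter, defaultdict

    class Node:
        def __init__(self, item, parent):
            self.item, self.parent = item, parent
            self.count = 0
            self.children = {}

    def build_fptree(transactions, min_sup):
        # pass 1: count items, keep frequent ones in descending-frequency order
        freq = Counter(i for t in transactions for i in t)
        flist = [i for i, c in freq.most_common() if c >= min_sup]
        rank = {i: r for r, i in enumerate(flist)}
        # pass 2: insert each transaction's ordered frequent items into the tree
        root = Node(None, None)
        header = defaultdict(list)   # item -> list of nodes (node-links)
        for t in transactions:
            node = root
            for item in sorted((i for i in t if i in rank), key=rank.get):
                child = node.children.get(item)
                if child is None:
                    child = Node(item, node)
                    node.children[item] = child
                    header[item].append(child)
                child.count += 1
                node = child
        return root, header, flist

    db = [set('facdgimp'), set('abcflmo'), set('bfhjow'),
          set('bcksp'), set('afcelpmn')]
    root, header, flist = build_fptree(db, 3)
    print(flist)  # ['f', 'c', 'a', ...] -- tie order among count-3 items may vary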
Benefits of the FP-tree Structure
Completeness
Preserve complete information for frequent pattern
mining
Never break a long pattern of any transaction
Compactness
Reduce irrelevant info—infrequent items are gone
Items in frequency descending order: the more
frequently occurring, the more likely to be shared
Never larger than the original database (not counting node-links and the count fields)
Partition Patterns and Databases
Frequent patterns can be partitioned into subsets
according to f-list
F-list=f-c-a-b-m-p
Patterns containing p
Patterns having m but no p
…
Patterns having c but none of a, b, m, or p
Pattern f
Completeness and non-redundancy
Find Patterns Having P From P-conditional Database
Starting at the frequent-item header table in the FP-tree,
traverse the FP-tree by following the node-links of each frequent item p, and
accumulate all transformed prefix paths of item p to form p's conditional pattern base.

Conditional pattern bases (for the FP-tree built above):
item | conditional pattern base
c | f:3
a | fc:3
b | fca:1, f:1, c:1
m | fca:2, fcab:1
p | fcam:2, cb:1
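Continuing the hypothetical build_fptree sketch from the construction slide, each item's conditional pattern base can be collected by walking parent pointers from every node on the item's header-table list:

    def prefix_path(node):
        # walk parent links to collect the path above a node (excluding it)
        path = []
        node = node.parent
        while node and node.item is not None:
            path.append(node.item)
            node = node.parent
        return list(reversed(path))

    def conditional_pattern_base(header, item):
        # item -> list of (prefix path, count) pairs, one per node of the item
        return [(prefix_path(n), n.count) for n in header[item]]

    print(conditional_pattern_base(header, 'p'))
    # with this code's tie-breaking: [(['f','c','a','m'], 2), (['c'], 1)];
    # under the slide's F-list f-c-a-b-m-p the second entry is (['c','b'], 1)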
From Conditional Pattern-Bases to Conditional FP-trees
For each pattern-base
Accumulate the count for each item in the base
Construct the FP-tree for the frequent items of the
pattern base
m-conditional pattern base: fca:2, fcab:1
Accumulating counts over this base gives f:3, c:3, a:3, b:1; b falls below min_support = 3 and is dropped.
m-conditional FP-tree: {} → f:3 → c:3 → a:3
All frequent patterns relating to m: m, fm, cm, am, fcm, fam, cam, fcam
Mining Frequent Patterns With FP-Trees
Idea: Frequent Pattern Growth
Recursively grow frequent patterns by pattern and
database partition.
Method
For each frequent item, construct its conditional
pattern-base, and then its conditional FP-tree
Repeat the process on each newly created conditional
FP-tree
Until the resulting FP-tree is empty, or it contains only one path; a single path generates all the combinations of its sub-paths, each of which is a frequent pattern (a sketch of the recursion follows below).
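A toy recursion in the spirit of this method. For brevity it operates on lists of (prefix path, count) pairs rather than on physical FP-trees, so the representation and function name are illustrative; on the ordered transactions from the construction example it recovers the same patterns, e.g., all eight patterns relating to m:

    from collections import Counter

    def fpgrowth(pattern_bases, min_sup, suffix=()):
        # pattern_bases: list of (items, count) prefix paths
        counts = Counter()
        for items, cnt in pattern_bases:
            for i in set(items):
                counts[i] += cnt
        results = {}
        for item, sup in counts.items():
            if sup < min_sup:
                continue
            pattern = tuple(sorted(suffix + (item,)))
            results[pattern] = sup
            # conditional pattern base: the prefix of `item` within each path
            cond = [(items[:items.index(item)], cnt)
                    for items, cnt in pattern_bases if item in items]
            results.update(fpgrowth(cond, min_sup, suffix + (item,)))
        return results

    db = [list('fcamp'), list('fcabm'), list('fb'), list('cbp'), list('fcamp')]
    result = fpgrowth([(t, 1) for t in db], 3)
    print({p: s for p, s in result.items() if 'm' in p})
    # m, fm, cm, am, fcm, fam, cam, fcam -- each with support 3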
Why Is FP-Growth the Winner?
Divide-and-conquer:
decompose both the mining task and DB according to
the frequent patterns obtained so far
leads to focused search of smaller databases
Other factors
no candidate generation, no candidate test
compressed database: FP-tree structure
no repeated scan of entire database
basic ops are counting local frequent items and building sub-FP-trees; no pattern search and matching
Mining Various Kinds of Association Rules
Mining multilevel association
Mining multidimensional association
Mining quantitative association
Mining interesting correlation patterns
Mining Multiple-Level Association Rules
Items often form hierarchies; items at a lower level are expected to have lower support.
Association rules generated from mining data at multiple levels of abstraction are called multiple-level or multilevel association rules.
Multilevel association rules can be mined efficiently using concept hierarchies under a support-confidence framework.

Uniform support:
Level 1 (min_sup = 5%): Milk [support = 10%]
Level 2 (min_sup = 5%): 2% Milk [support = 6%], Skim Milk [support = 4%]

Reduced support:
Level 1 (min_sup = 5%): Milk [support = 10%]
Level 2 (min_sup = 3%): 2% Milk [support = 6%], Skim Milk [support = 4%]
Cont…
Uniform support: the same minimum support threshold is used when mining at each level of abstraction.
The method is simple because users are required to specify only one minimum support threshold.
The search avoids examining itemsets containing any item whose ancestors do not have minimum support.
If the minimum support value is set too high, it could miss some meaningful associations occurring at low levels.
If the threshold is set too low, it may generate many uninteresting associations occurring at high abstraction levels.
Reduced support: each level of abstraction has its own minimum support value, as sketched below.
The deeper the level of abstraction, the smaller the corresponding value.
Item- or group-based support: it is sometimes desirable to set up user-specific, item- or group-based minimum support thresholds when mining multilevel rules.
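A minimal sketch of the reduced-support strategy, using the milk example from the previous slide (the per-level thresholds are the ones in the figure; the data layout is illustrative):

    # each level of the concept hierarchy gets its own threshold
    min_sup_by_level = {1: 0.05, 2: 0.03}   # deeper level -> smaller threshold

    candidates = [
        ("Milk",      1, 0.10),
        ("2% Milk",   2, 0.06),
        ("Skim Milk", 2, 0.04),
    ]

    frequent = [(item, sup) for item, level, sup in candidates
                if sup >= min_sup_by_level[level]]
    print(frequent)
    # all three items qualify; under uniform 5% support, Skim Milk (4%) would not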
Mining Multi-Dimensional Association
Single-dimensional rules:
buys(X, “milk”) ⇒ buys(X, “bread”)
Multi-dimensional rules: 2 or more dimensions or predicates
Inter-dimension association rules (no repeated predicates):
age(X, “19-25”) ∧ occupation(X, “student”) ⇒ buys(X, “coke”)
Hybrid-dimension association rules (repeated predicates):
age(X, “19-25”) ∧ buys(X, “popcorn”) ⇒ buys(X, “coke”)
Categorical attributes: finite number of possible values, no ordering among values (data cube approach).
Quantitative attributes: numeric, implicit ordering among values (dynamic discretization, clustering, and gradient approaches).
Static Discretization of Quantitative Attributes
Attributes are discretized prior to mining using concept hierarchies; numeric values are replaced by ranges.
In a relational database, finding all frequent k-predicate sets requires k or k+1 table scans.
A data cube is well suited for mining: the cells of an n-dimensional cuboid correspond to the predicate sets, so mining from data cubes can be much faster.

[Cuboid lattice figure: apex (); 1-D cuboids (age), (income), (buys); 2-D cuboids (age, income), (age, buys), (income, buys); base cuboid (age, income, buys)]
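A minimal illustration of static discretization: a numeric value is mapped onto a predefined range from the concept hierarchy, and the range then acts as a categorical value (the bin boundaries here are hypothetical, not from the slide):

    def discretize_age(age):
        # map a numeric age onto predefined ranges (illustrative bins)
        bins = [(0, 18, "0-18"), (19, 25, "19-25"), (26, 34, "26-34"),
                (35, 50, "35-50"), (51, 200, "51+")]
        for lo, hi, label in bins:
            if lo <= age <= hi:
                return label
        return None

    print(discretize_age(23))  # '19-25'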
Quantitative Association Rules
Numeric attributes are dynamically discretized such that the confidence or compactness of the rules mined is maximized.
2-D quantitative association rules: A_quan1 ∧ A_quan2 ⇒ A_cat
Cluster adjacent association rules to form general rules using a 2-D grid.
Example:
age(X, “34-35”) ∧ income(X, “30-50K”) ⇒ buys(X, “high resolution TV”)
Interestingness Measure: Correlations (Lift)
lift = P(A ∪ B) / (P(A) P(B))
Cont…
Association rules mined using a support-confidence framework are useful for many applications, but they can sometimes be misleading: the framework may identify a rule A ⇒ B as interesting even when the occurrence of A does not imply the occurrence of B.
We therefore consider an alternative framework for finding interesting relationships between itemsets, based on correlation.
The occurrence of itemset A is independent of the occurrence of itemset B if the correlation (lift) is 1.
If the correlation is less than 1, then the occurrence of A is negatively correlated with the occurrence of B.
If the value is greater than 1, then A and B are positively correlated, meaning the occurrence of one implies the occurrence of the other.
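A short sketch computing lift from the five-transaction example used earlier for support and confidence, here for itemsets {A} and {D}:

    transactions = [{'A', 'B', 'D'}, {'A', 'C', 'D'}, {'A', 'D', 'E'},
                    {'B', 'E', 'F'}, {'B', 'C', 'D', 'E', 'F'}]
    n = len(transactions)

    def p(itemset):
        # probability that a transaction contains every item in `itemset`
        return sum(1 for t in transactions if itemset <= t) / n

    lift = p({'A', 'D'}) / (p({'A'}) * p({'D'}))
    print(lift)  # 0.6 / (0.6 * 0.8) = 1.25 > 1, so A and D are positively correlated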
Constraint-Based (Query-Directed) Mining
Finding all the patterns in a database autonomously? —
unrealistic!
The patterns could be too many but not focused!
Data mining should be an interactive process:
the user directs what is to be mined using a data mining query language (or a graphical user interface).
Constraint-based mining:
User flexibility: provides constraints on what is to be mined.
System optimization: explores such constraints for efficient mining.
Constraints in Data Mining
Knowledge Type Constraint: These specify the type of
knowledge to be mined such as classification, association,
etc.
Data Constraint: These specify the task-relevant data.
Dimension/Level Constraint: These specify the desired dimensions or attributes of the data, or the levels of the concept hierarchies, to be used in mining.
Rule (or pattern) constraint: These specify the form of
rules to be mined.
Interestingness constraints: These specify thresholds on statistical measures of rule interestingness, such as support, confidence, and correlation (e.g., strong rules: min_support ≥ 3%, min_confidence ≥ 60%).