Decision Tree
When to use Decision Trees
§ Problem characteristics:
§ Instances can be described by attribute-value pairs
§ Target function is discrete valued
§ Disjunctive hypothesis may be required
§ Possibly noisy training data samples
§ Robust to errors in training data
§ Missing attribute values
§ Different classification problems:
§ Equipment or medical diagnosis
§ Credit risk analysis
§ Several tasks in natural language processing
Top-down induction of Decision Trees
§ ID3 (Quinlan, 1986) is a basic algorithm for learning DTs
§ Given a training set of examples, the algorithm for building a DT
performs a search in the space of decision trees
§ The construction of the tree is top-down. The algorithm is greedy.
§ The fundamental question is “which attribute should be tested next?
Which question gives us more information?”
§ Select the best attribute
§ A descendant node is then created for each possible value of this
attribute, and the examples are partitioned according to that value
§ The process is repeated for each successor node until all the
examples are classified correctly or there are no attributes left
Which attribute is the best classifier?
[Figure: candidate attributes compared by how they partition the training examples, e.g. {D1, D2, D8} → No, {D9, D11} → Yes, {D4, D5, D10} → Yes, {D6, D14} → No.]
ID3: algorithm
ID3(X, T, Attrs)   X: training examples,
                   T: target attribute (e.g. PlayTennis),
                   Attrs: other attributes, initially all attributes
Create Root node
If all X's are +, return Root with class +
If all X's are −, return Root with class −
If Attrs is empty, return Root with class the most common value of T in X
else
  A ← best attribute; decision attribute for Root ← A
  For each possible value vi of A:
    - add a new branch below Root for the test A = vi
    - Xi ← subset of X with A = vi
    - If Xi is empty then add a new leaf with class the most common value of T in X
      else add the subtree generated by ID3(Xi, T, Attrs − {A})
return Root
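As a concrete illustration, here is a minimal Python sketch of the recursion above; the function names (id3, information_gain) and the dictionary-based tree representation are illustrative choices, not part of the original pseudocode, and information gain is used as the attribute-selection criterion.

```python
import math
from collections import Counter

def entropy(examples, target):
    """Entropy of the target attribute over a list of example dicts."""
    counts = Counter(x[target] for x in examples)
    total = len(examples)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(examples, attr, target):
    """Expected reduction in entropy from partitioning on attr."""
    total = len(examples)
    remainder = 0.0
    for value in set(x[attr] for x in examples):
        subset = [x for x in examples if x[attr] == value]
        remainder += (len(subset) / total) * entropy(subset, target)
    return entropy(examples, target) - remainder

def id3(examples, target, attrs):
    """Return a tree: either a class label or a dict {attr: {value: subtree}}."""
    classes = [x[target] for x in examples]
    if len(set(classes)) == 1:              # all examples have the same class
        return classes[0]
    if not attrs:                           # no attributes left: majority class
        return Counter(classes).most_common(1)[0][0]
    best = max(attrs, key=lambda a: information_gain(examples, a, target))
    tree = {best: {}}
    for value in set(x[best] for x in examples):
        subset = [x for x in examples if x[best] == value]
        tree[best][value] = id3(subset, target, [a for a in attrs if a != best])
    return tree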
Search space in Decision Tree learning
§ The search space is made by
partial decision trees
§ The algorithm is hill-climbing
§ The evaluation function is
information gain
§ The hypothesis space is complete
(represents all discrete-valued
functions)
§ The search maintains a single
current hypothesis
§ No backtracking; no guarantee of
optimality
§ It uses all the available examples
(not incremental)
§ It may terminate early, accepting
leaves with noisy (mixed) classes
Inductive bias in decision tree learning
(Outlook = Sunny) ∧ (Humidity = High) ⇒ (PlayTennis = No)
Why convert to rules?
§ Each distinct path produces a different rule: a condition
removal may be based on a local (contextual) criterion. Node
pruning is global and affects all the rules
§ In rule form, tests are not ordered and there is no bookkeeping
involved when conditions (nodes) are removed
§ Converting to rules improves readability for humans
Dealing with continuous-valued attributes
§ So far discrete values for attributes and for outcome.
§ Given a continuous-valued attribute A, dynamically create a
new attribute Ac
Ac = True if A < c, False otherwise
§ How to determine threshold value c ?
§ Example. Temperature in the PlayTennis example
§ Sort the examples according to Temperature
Temperature:  40   48  |  60   72   80  |  90
PlayTennis:   No   No  | Yes  Yes  Yes  |  No
                 (54)                 (85)
§ Determine candidate thresholds by averaging consecutive values where
there is a change in classification: (48+60)/2=54 and (80+90)/2=85
§ Evaluate the candidate thresholds (attributes) according to information gain.
The best is Temperature > 54. The new attribute then competes with the
other attributes
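A small Python sketch of the threshold-selection step just described; the function names are illustrative, and the data is the Temperature example from the slide.

```python
import math

def binary_entropy(labels):
    """Entropy of a list of binary class labels."""
    if not labels:
        return 0.0
    p = labels.count(labels[0]) / len(labels)
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def candidate_thresholds(values, labels):
    """Midpoints between consecutive values where the class label changes."""
    pairs = sorted(zip(values, labels))
    return [(pairs[i][0] + pairs[i + 1][0]) / 2
            for i in range(len(pairs) - 1)
            if pairs[i][1] != pairs[i + 1][1]]

def gain_for_threshold(values, labels, c):
    """Information gain of the boolean test 'value > c'."""
    above = [l for v, l in zip(values, labels) if v > c]
    below = [l for v, l in zip(values, labels) if v <= c]
    n = len(labels)
    remainder = (len(above) / n) * binary_entropy(above) + \
                (len(below) / n) * binary_entropy(below)
    return binary_entropy(labels) - remainder

temperature = [40, 48, 60, 72, 80, 90]
play_tennis = ["No", "No", "Yes", "Yes", "Yes", "No"]
for c in candidate_thresholds(temperature, play_tennis):   # expected: 54.0 and 85.0
    print(c, gain_for_threshold(temperature, play_tennis, c))
```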
Problems with information gain
§ Natural bias of information gain: it favours attributes with
many possible values.
§ Consider the attribute Date in the PlayTennis example.
§ Date would have the highest information gain since it perfectly
separates the training data.
§ It would be selected at the root resulting in a very broad tree
§ Although very good on the training data, this tree would perform poorly in
predicting unknown instances: it overfits.
§ The problem is that the partition is too specific, too many small
classes are generated.
§ We need to look at alternative measures …
An alternative measure: gain ratio
SplitInformation(S, A) ≡ − Σ_{i=1..c} (|Si| / |S|) log₂ (|Si| / |S|)
§ Si are the sets obtained by partitioning on value i of A
§ SplitInformation measures the entropy of S with respect to the values of A. The
more uniformly dispersed the data the higher it is.
GainRatio(S, A) ≡ Gain(S, A) / SplitInformation(S, A)
§ GainRatio penalizes attributes that split the examples into many small classes, such as
Date. Let |S| = n; Date splits the examples into n classes:
§ SplitInformation(S, Date) = −[(1/n) log₂(1/n) + … + (1/n) log₂(1/n)] = −log₂(1/n) = log₂ n
§ Compare with an attribute A that splits the data into two even classes:
§ SplitInformation(S, A) = −[(1/2) log₂(1/2) + (1/2) log₂(1/2)] = −[−1/2 − 1/2] = 1
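A hedged Python sketch of SplitInformation and GainRatio as defined above; the gain is assumed to have been computed elsewhere (e.g., by the information-gain function sketched earlier) and is passed in as a plain number, and the worked checks mirror the Date and two-way-split cases on the slide.

```python
import math
from collections import Counter

def split_information(examples, attr):
    """Entropy of S with respect to the values of attribute attr."""
    counts = Counter(x[attr] for x in examples)
    n = len(examples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def gain_ratio(gain, examples, attr):
    """GainRatio = Gain / SplitInformation (the caller supplies the gain)."""
    si = split_information(examples, attr)
    return gain / si if si > 0 else float("inf")   # guard against SplitInformation = 0

# A Date-like attribute putting each of n examples in its own class: SplitInformation = log2(n) ...
date_like = [{"Date": i} for i in range(14)]
print(split_information(date_like, "Date"), math.log2(14))   # both ~3.807
# ... while an attribute splitting the data into two even halves has SplitInformation = 1.
even_split = [{"A": i % 2} for i in range(14)]
print(split_information(even_split, "A"))                     # 1.0
```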
Adjusting gain-ratio
§ Problem: SplitInformation(S, A) can be zero or very small
when |Si | ≈ |S | for some value i
§ To mitigate this effect, the following heuristic has been used:
1. compute Gain for each attribute
2. apply GainRatio only to attributes with Gain above average
§ Other measures have been proposed:
§ Distance-based metric [Lopez-De Mantaras, 1991] on the partitions of
data
§ Each partition (induced by an attribute) is evaluated according to the
distance to the partition that perfectly classifies the data.
§ The partition closest to the ideal partition is chosen
Handling incomplete training data
§ How to cope with the problem that the value of some attribute
may be missing?
§ Example: Blood-Test-Result in a medical diagnosis problem
§ The strategy: use the other examples to guess the missing attribute value
1. Assign the value that is most common among the training examples at
the node
2. Assign a probability to each value, based on frequencies, and assign a
value to the missing attribute according to this probability distribution
§ Missing values in new instances to be classified are treated
accordingly, and the most probable classification is chosen
(C4.5)
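Both strategies can be sketched in a few lines of Python; the function names and the node-level example lists are illustrative and this is not C4.5's actual implementation, which distributes fractional examples instead.

```python
import random
from collections import Counter

def fill_most_common(examples, attr):
    """Strategy 1: replace missing values (None) with the most common value at this node."""
    known = [x[attr] for x in examples if x[attr] is not None]
    most_common = Counter(known).most_common(1)[0][0]
    return [dict(x, **{attr: most_common}) if x[attr] is None else x for x in examples]

def fill_by_frequency(examples, attr, rng=random.Random(0)):
    """Strategy 2: sample a replacement value according to the observed frequencies."""
    known = [x[attr] for x in examples if x[attr] is not None]
    values, counts = zip(*Counter(known).items())
    return [dict(x, **{attr: rng.choices(values, weights=counts)[0]})
            if x[attr] is None else x for x in examples]

examples = [{"BloodTest": "Pos"}, {"BloodTest": "Pos"},
            {"BloodTest": "Neg"}, {"BloodTest": None}]
print(fill_most_common(examples, "BloodTest"))
print(fill_by_frequency(examples, "BloodTest"))
```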
Handling attributes with different costs
§ Instance attributes may have an associated cost: we would
prefer decision trees that use low-cost attributes
§ ID3 can be modified to take into account costs:
1. Tan and Schlimmer (1990):
   Gain²(S, A) / Cost(S, A)
2. Nunez (1988):
   (2^Gain(S, A) − 1) / (Cost(A) + 1)^w,   with w ∈ [0, 1]
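The two cost-sensitive selection measures can be written directly; a brief sketch, assuming Gain and the (per-attribute) Cost have already been computed, with made-up attribute names and numbers for illustration.

```python
def tan_schlimmer(gain, cost):
    """Tan & Schlimmer (1990): Gain^2 / Cost."""
    return gain ** 2 / cost

def nunez(gain, cost, w=0.5):
    """Nunez (1988): (2^Gain - 1) / (Cost + 1)^w, with w in [0, 1] weighting the cost."""
    return (2 ** gain - 1) / (cost + 1) ** w

# Hypothetical attributes: (information gain, measurement cost)
attrs = {"Temperature": (0.45, 1.0), "BloodTest": (0.60, 20.0)}
for name, (g, c) in attrs.items():
    print(name, tan_schlimmer(g, c), nunez(g, c))
```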
Classification: Basic Concepts, Decision Trees
Classification Learning: Definition
! Given a collection of records (training set)
– Each record contains a set of attributes; one of the
attributes is the class
! Find a model for the class attribute as a function of the
values of the other attributes
! Goal: previously unseen records should be assigned a class
as accurately as possible
– Use test set to estimate the accuracy of the model
– Often, the given data set is divided into training and test
sets, with training set used to build the model and test set
used to validate it
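The train/test protocol described above is what libraries automate; below is a minimal sketch using scikit-learn (assumed to be installed), with an entropy-based tree standing in for ID3 and the built-in iris dataset standing in for the training records.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# Hold out part of the data: build the model on the training set only.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier(criterion="entropy", random_state=0)
model.fit(X_train, y_train)

# Estimate accuracy on records the model has never seen.
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```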
Illustrating Classification Learning
[Figure: a learning algorithm induces a model from a labeled Training Set (records with Tid, Attrib1–3, Class); the learned model is then applied to a Test Set whose Class values are unknown.]
Examples of Classification Task
! Predicting tumor cells as benign or malignant
[Slide: an example decision tree, built from the training set below.]

Tid | Refund | Marital Status | Taxable Income | Cheat
1   | Yes    | Single         | 125K           | No
2   | No     | Married        | 100K           | No
3   | No     | Single         | 70K            | No
4   | Yes    | Married        | 120K           | No
5   | No     | Divorced       | 95K            | Yes
6   | No     | Married        | 60K            | No
7   | Yes    | Divorced       | 220K           | No
8   | No     | Single         | 85K            | Yes
9   | No     | Married        | 75K            | No
10  | No     | Single         | 90K            | Yes

[Figure: the tree tests Refund first (Yes → NO; No → MarSt), then MarSt (Married → NO; Single, Divorced → TaxInc), then TaxInc (< 80K → NO; > 80K → YES).]
Apply Model to Test Data

Test Data:
Refund | Marital Status | Taxable Income | Cheat
No     | Married        | 80K            | ?

[Figure: the test record is routed down the tree: Refund = No → MarSt; MarSt = Married → leaf NO. Assign Cheat to "No".]
Decision Tree Learning: ID3
! Function ID3(Training-set, Attributes)
– If all elements in Training-set are in same class, then return leaf
node labeled with that class
– Else if Attributes is empty, then return leaf node labeled with
majority class in Training-set
– Else if Training-Set is empty, then return leaf node labeled with
default majority class
– Else
u Select and remove A from Attributes
u Make A the root of the current tree
u For each value V of A
§ Create a branch of the current tree labeled by V
§ Partition_V ← Elements of Training-set with value V for A
§ ID3(Partition_V, Attributes)   (recursive call)
§ Attach result to branch V
Illustrative Training Set
Risk Assessment for Loan Applications
Client # | Credit History | Debt Level | Collateral | Income Level | Risk Level
1        | Bad            | High       | None       | Low          | HIGH
2        | Unknown        | High       | None       | Medium       | HIGH
3        | Unknown        | Low        | None       | Medium       | MODERATE
4        | Unknown        | Low        | None       | Low          | HIGH
5        | Unknown        | Low        | None       | High         | LOW
6        | Unknown        | Low        | Adequate   | High         | LOW
7        | Bad            | Low        | None       | Low          | HIGH
8        | Bad            | Low        | Adequate   | High         | MODERATE
9        | Good           | Low        | None       | High         | LOW
10       | Good           | High       | Adequate   | High         | LOW
11       | Good           | High       | None       | Low          | HIGH
12       | Good           | High       | None       | Medium       | MODERATE
13       | Good           | High       | None       | High         | LOW
14       | Bad            | High       | None       | Medium       | HIGH
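To make the worked example that follows reproducible, here is the same table as Python data together with an information-gain computation for the root attribute; the helper names are illustrative, and the printed ranking is only a sketch of how an attribute such as Income comes to be chosen first (tie-breaking and the exact criterion may differ from the slides).

```python
import math
from collections import Counter

loans = [
    # (Credit History, Debt Level, Collateral, Income Level, Risk Level)
    ("Bad", "High", "None", "Low", "HIGH"),
    ("Unknown", "High", "None", "Medium", "HIGH"),
    ("Unknown", "Low", "None", "Medium", "MODERATE"),
    ("Unknown", "Low", "None", "Low", "HIGH"),
    ("Unknown", "Low", "None", "High", "LOW"),
    ("Unknown", "Low", "Adequate", "High", "LOW"),
    ("Bad", "Low", "None", "Low", "HIGH"),
    ("Bad", "Low", "Adequate", "High", "MODERATE"),
    ("Good", "Low", "None", "High", "LOW"),
    ("Good", "High", "Adequate", "High", "LOW"),
    ("Good", "High", "None", "Low", "HIGH"),
    ("Good", "High", "None", "Medium", "MODERATE"),
    ("Good", "High", "None", "High", "LOW"),
    ("Bad", "High", "None", "Medium", "HIGH"),
]
ATTRS = ["Credit History", "Debt Level", "Collateral", "Income Level"]

def entropy(rows):
    """Entropy of the risk level (last column) over a list of rows."""
    counts = Counter(r[-1] for r in rows)
    n = len(rows)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def gain(rows, attr_index):
    """Information gain of splitting the rows on the given attribute column."""
    n = len(rows)
    remainder = 0.0
    for v in set(r[attr_index] for r in rows):
        subset = [r for r in rows if r[attr_index] == v]
        remainder += (len(subset) / n) * entropy(subset)
    return entropy(rows) - remainder

for i, name in enumerate(ATTRS):
    print(name, round(gain(loans, i), 3))
```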
ID3 Example (I)
1) Choose Income as the root of the tree; its values split the examples into branches 2), 3) and 4):
   2) Income = Low → {1, 4, 7, 11};  3) Income = Medium → {2, 3, 12, 14};  4) Income = High → {5, 6, 8, 9, 10, 13}
2) All examples in {1, 4, 7, 11} are in the same class, HIGH. Return a leaf node labeled HIGH.
3) For {2, 3, 12, 14}, choose Debt Level as the root of the subtree:
   3a) Debt Level = Low → {3};  3b) Debt Level = High → {2, 12, 14}
3a) All examples are in the same class, MODERATE. Return a leaf node labeled MODERATE.
ID3 Example (II)
3b) For {2, 12, 14}, choose Credit History as the root of the subtree:
    3b') Credit History = Unknown → {2};  3b'') Credit History = Bad → {14};  3b''') Credit History = Good → {12}
3b'–3b''') All examples in each branch are in the same class. Return leaf nodes: HIGH, HIGH and MODERATE respectively.
4) For {5, 6, 8, 9, 10, 13}, choose Credit History as the root of the subtree:
    4a) Credit History = Unknown → {5, 6};  4b) Credit History = Bad → {8};  4c) Credit History = Good → {9, 10, 13}
4a–4c) All examples in each branch are in the same class. Return leaf nodes: LOW, MODERATE and LOW respectively.
ID3 Example (III)
Attach the subtrees at the appropriate places. The final tree:
Income = Low → HIGH
Income = Medium → Debt Level:  Low → MODERATE;  High → Credit History (Unknown → HIGH, Bad → HIGH, Good → MODERATE)
Income = High → Credit History:  Unknown → LOW;  Bad → MODERATE;  Good → LOW
Decision surfaces are axis-aligned hyper-rectangles
[Figure: a decision tree splits the (A1, A2) feature space into axis-aligned rectangular regions.]
Non-Uniqueness
! Decision trees are not unique:
  – Given a set of training instances T, there generally exist a number of decision trees that are consistent with (or fit) T
[Figure: an alternative tree consistent with the same training data, testing MarSt first (Married → NO; Single, Divorced → Refund), then Refund (Yes → NO; No → TaxInc), then TaxInc (< 80K → NO; > 80K → YES).]
ID3’s Question
Given a training set, which of all of the decision trees consistent
with that training set should we pick?
More precisely:
Given a training set, which of all of the decision trees consistent
with that training set has the greatest likelihood of correctly
classifying unseen instances of the population?
ID3’s (Approximate) Bias
§ ID3 (and family) prefers simpler decision trees
§ Occam’s Razor Principle:
§ “It is vain to do with more what can be
done with less...Entities should not be
multiplied beyond necessity.”
§ Intuitively:
§ Always accept the simplest answer that fits
the data, avoid unnecessary constraints
§ Simpler trees are more general
ID3’s Question Revisited
! ID3 builds a decision tree by recursively selecting
attributes and splitting the training data based on the values
of these attributes.
Practically:
Given a training set, how do we select attributes so that the
resulting tree is as small as possible, i.e. contains as few
attributes as possible?
Not All Attributes Are Created Equal
! Each attribute of an instance may be thought
of as contributing a certain amount of
information to its classification
– Think 20-Questions:
u What are good questions?
u Ones whose answers maximize information gained
– For example, determine shape of an object:
u Num. sides contributes a certain amount of information
u Color contributes a different amount of information
Entropy(t) = − Σ_j p(j|t) log p(j|t)
(where p(j|t) is the relative frequency of class j at node t)

GAIN_split = Entropy(parent) − Σ_{i=1..k} (n_i / n) Entropy(i)
(where the parent node with n records is split into k partitions, and n_i is the number of records in partition i)

[Figure: two candidate splits of the same node. Splitting on A (Yes/No) yields children with entropies E1 and E2, which combine into E12; splitting on B yields E3 and E4, which combine into E34.]
Gain = E0 − E12 vs. E0 − E34
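A short sketch of the comparison in the figure: E0 is the entropy of the parent node, and E12/E34 are the size-weighted entropies of the children after splitting on A or B. The class counts below are made up for illustration.

```python
import math

def entropy(counts):
    """Entropy of a node given its class counts."""
    n = sum(counts)
    return -sum((c / n) * math.log2(c / n) for c in counts if c > 0)

def weighted_entropy(children):
    """Entropy after a split: sum of child entropies weighted by child size."""
    n = sum(sum(c) for c in children)
    return sum((sum(c) / n) * entropy(c) for c in children)

parent = (6, 6)                  # hypothetical node: 6 positive, 6 negative
split_a = [(5, 1), (1, 5)]       # children produced by test A
split_b = [(3, 3), (3, 3)]       # children produced by test B
E0 = entropy(parent)
print("Gain(A) =", E0 - weighted_entropy(split_a))   # larger gain: A is the better test
print("Gain(B) =", E0 - weighted_entropy(split_b))   # zero gain: B tells us nothing
```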
Gain Ratio
! Gain Ratio:
GainRATIO_split = GAIN_split / SplitINFO,   where SplitINFO = − Σ_{i=1..k} (n_i / n) log(n_i / n)
(the parent node with n records is split into k partitions, and n_i is the number of records in partition i)

[Figure: multi-way split on CarType into Family, Sports and Luxury.]
! Binary split: divide the values into two subsets, e.g. CarType ∈ {Sports, Luxury} vs. {Family}, OR CarType ∈ {Family, Luxury} vs. {Sports}
§ Underfitting:
§ The model is too simple, so that both training and test errors are large
[Slides: "Detecting Overfitting" and "Overfitting" (figures not preserved).]
Overfitting in Decision Tree Learning
! Overfitting results in decision trees that are more complex than
necessary
– Tree growth went too far
– Number of instances gets smaller as we build the tree (e.g., several
leaves match a single example)
[Figure: an overly complex tree splitting on binary attributes Q, R and S for classes + and −.]
[Figure: a decision tree for PlayTennis. Outlook at the root: Sunny → Humidity (High → No, Normal → Yes); Overcast → Yes; Rain → Wind (Strong → No, Weak → Yes).]
ID3: The Basic Decision Tree
Learning Algorithm
§ Training database: see [Mitchell, p. 59] (the PlayTennis examples D1–D14)
ID3 (Cont’d)
[Figure: the first split chosen by ID3 on the PlayTennis data. Outlook = Sunny → {D1, D2, D8, D9, D11}; Outlook = Overcast → {D3, D7, D12, D13}; Outlook = Rain → {D4, D5, D6, D10, D14}.]
What Attribute to choose to “best” split
a node? (Cont’d)
§ We want to use this measure to choose an attribute that minimizes
the disorder in the partitions it creates. Let {S_i | 1 ≤ i ≤ n} be a
partition of S resulting from a particular attribute. The disorder
associated with this partition is:
V({S_i | 1 ≤ i ≤ n}) = Σ_i (|S_i| / |S|) · I({P(S_i), N(S_i)})
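For concreteness, a sketch of the disorder measure V, where I({P(S_i), N(S_i)}) is taken to be the binary entropy of the positive/negative counts in partition S_i; the counts used are purely illustrative.

```python
import math

def I(p, n):
    """Binary entropy of a set with p positive and n negative examples."""
    total = p + n
    result = 0.0
    for c in (p, n):
        if c > 0:
            result -= (c / total) * math.log2(c / total)
    return result

def disorder(partitions):
    """V({S_i}) = sum_i |S_i|/|S| * I(P(S_i), N(S_i)), for (positive, negative) count pairs."""
    total = sum(p + n for p, n in partitions)
    return sum(((p + n) / total) * I(p, n) for p, n in partitions)

# Illustrative partition induced by some attribute: three subsets with (pos, neg) counts.
print(disorder([(2, 3), (4, 0), (3, 2)]))
```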
Hypothesis Space Search in Decision
Tree Learning
§ Hypothesis Space: the set of possible decision trees (i.e., the complete
space of finite discrete-valued functions).
§ Search Method: Simple-to-Complex Hill-Climbing Search (only
a single current hypothesis is maintained, unlike the candidate-
elimination method). No backtracking!
§ Evaluation Function: the Information Gain measure
§ Batch Learning: ID3 uses all training examples at each step to
make statistically-based decisions (unlike the candidate-
elimination method, which makes decisions incrementally) ⇒
the search is less sensitive to errors in individual training
examples.
Inductive Bias in Decision Tree
Learning
§ ID3’s Inductive Bias: Shorter trees are preferred over
longer trees. Trees that place high information gain
attributes close to the root are preferred over those that
do not.
§ Note: this type of bias is different from the type of bias
used by Candidate-Elimination: the inductive bias of ID3
follows from its search strategy (preference or search
bias) whereas the inductive bias of the Candidate-
Elimination algorithm follows from the definition of its
hypothesis space (restriction or language bias).
Why Prefer Short Hypotheses?
§ Occam’s razor:
Prefer the simplest hypothesis that fits the data [William
of Occam (Philosopher), circa 1320]
§ Scientists seem to do that: e.g., physicists seem to prefer simple
explanations for the motion of planets over more complex ones
§ Argument: Since there are fewer short hypotheses than long ones, it is
less likely that one will find a short hypothesis that coincidentally fits the
training data.
§ Problem with this argument: it can be made about many other
constraints. Why is the “short description” constraint more relevant than
others?
§ Nevertheless: Occam’s razor was shown experimentally to be a
successful strategy!
Issues in Decision Tree Learning: I.
Avoiding Overfitting the Data
§ Definition: Given a hypothesis space H, a hypothesis h ∈ H is
said to overfit the training data if there exists some
alternative hypothesis h' ∈ H, such that h has smaller error
than h' over the training examples, but h' has a smaller error
than h over the entire distribution of instances. (See the curves in
[Mitchell, p. 67].)
§ There are two approaches for overfitting avoidance in
Decision Trees:
§ Stop growing the tree before it perfectly fits the data
§ Allow the tree to overfit the data, and then post-prune it.
Issues in Decision Tree Learning: II. Other
Issues
§ Incorporating Continuous-Valued Attributes
§ Alternative Measures for Selecting Attributes
§ Handling Training Examples with Missing Attribute Values
§ Handling Attributes with Differing Costs