VARDHAMAN COLLEGE OF ENGINEERING
(AUTONOMOUS)
Affiliated to JNTUH, Approved by AICTE, Accredited by NAAC with A++ Grade, ISO 9001:2015 Certified
Kacharam, Shamshabad, Hyderabad - 501218, Telangana, India
CONCEPT LEARNING
Learning involves acquiring general concepts from specific training examples. Example:
People continually learn general concepts or categories such as "bird," "car," "situations
in which I should study more in order to pass the exam," etc.
Each such concept can be viewed as describing some subset of objects or events defined
over a larger set.
Alternatively, each concept can be thought of as a Boolean-valued function defined over
this larger set. (Example: A function defined over all animals, whose value is true for
birds and false for other animals).
Definition: Concept learning - Inferring a Boolean-valued function from training
examples of its input and output.
A CONCEPT LEARNING TASK
Consider the example task of learning the target concept "Days on which Aldo enjoys his
favorite water sport".
Example   Sky     AirTemp   Humidity   Wind     Water   Forecast   EnjoySport
1         Sunny   Warm      Normal     Strong   Warm    Same       Yes
2         Sunny   Warm      High       Strong   Warm    Same       Yes
3         Rainy   Cold      High       Strong   Warm    Change     No
4         Sunny   Warm      High       Strong   Cool    Change     Yes
Table: Positive and negative training examples for the target concept EnjoySport.
The task is to learn to predict the value of EnjoySport for an arbitrary day, based on
the values of its other attributes.
What hypothesis representation is provided to the learner?
Let us consider a simple representation in which each hypothesis consists of
a conjunction of constraints on the instance attributes.
Let each hypothesis be a vector of six constraints, specifying the values
of the six attributes Sky, AirTemp, Humidity, Wind, Water, and Forecast.
For each attribute, the hypothesis will either
Indicate by a "?" that any value is acceptable for this attribute,
Specify a single required value (e.g., Warm) for the attribute, or
Indicate by a "Φ" that no value is acceptable
If some instance x satisfies all the constraints of hypothesis h, then h classifies x as a
positive example (h(x) = 1).
The hypothesis that Aldo enjoys his favorite sport only on cold days with high humidity is
represented by the expression
(?, Cold, High, ?, ?, ?)
The most general hypothesis-that every day is a positive example-is represented by
(?, ?, ?, ?, ?, ?)
The most specific possible hypothesis-that no day is a positive example-is represented by
(Φ, Φ, Φ, Φ, Φ, Φ)
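To make the representation concrete, the following is a minimal Python sketch (not part of the original notes) that encodes a hypothesis as a 6-tuple of constraints and classifies an instance; the helper names satisfies and classify are illustrative, and 'Ø' stands for the "no value acceptable" constraint.

# A hypothesis is a 6-tuple of constraints: '?', 'Ø', or a specific attribute value.
# An instance is a 6-tuple of attribute values.

def satisfies(x, h):
    """Return True iff instance x meets every constraint of hypothesis h."""
    return all(c == '?' or c == v for c, v in zip(h, x))

def classify(x, h):
    """h(x) = 1 if x satisfies h, else 0 (a 'Ø' constraint matches nothing)."""
    return 1 if satisfies(x, h) else 0

h = ('?', 'Cold', 'High', '?', '?', '?')                   # cold days with high humidity
x = ('Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same')  # a sample instance
print(classify(x, h))                                      # 0: AirTemp and Humidity constraints fail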
Notation
The set of items over which the concept is defined is called the set of instances,
which is denoted by X.
Example: X is the set of all possible days, each represented by the attributes: Sky,
AirTemp, Humidity, Wind, Water, and Forecast
The concept or function to be learned is called the target concept, which is denoted by
c. In general, c can be any Boolean-valued function defined over the instances X:
c : X → {0, 1}
Example: The target concept corresponds to the value of the attribute EnjoySport
(i.e., c(x) = 1 if EnjoySport = Yes, and c(x) = 0 if EnjoySport = No).
Instances for which c(x) = 1 are called positive examples, or members of the target concept.
Instances for which c(x) = 0 are called negative examples, or non-members of
the target concept.
The ordered pair (x, c(x)) describes a training example consisting of the instance
x and its target concept value c(x).
The symbol D denotes the set of available training examples.
The symbol H denotes the set of all possible hypotheses that the learner may consider
regarding the identity of the target concept. Each hypothesis h in H represents a Boolean-
valued function defined over X:
h : X → {0, 1}
The goal of the learner is to find a hypothesis h such that h(x) = c(x) for all x in X.
Given:
Instances X: Possible days, each described by the attributes
Sky (with possible values Sunny, Cloudy, and Rainy),
AirTemp (with values Warm and Cold),
Humidity (with values Normal and High),
Wind (with values Strong and Weak),
Water (with values Warm and Cool),
Forecast (with values Same and Change).
Hypotheses H: Each hypothesis is described by a conjunction of constraints on the
attributes Sky, AirTemp, Humidity, Wind, Water, and Forecast. The constraints may
be "?" (any value is acceptable), “Φ” (no value is acceptable), or a specific value.
Target concept c: EnjoySport : X → {0, 1}
Training examples D: Positive and negative examples of the target function
Determine:
A hypothesis h in H such that h(x) = c(x) for all x in X.
Table: The EnjoySport concept learning task.
The inductive learning hypothesis
Any hypothesis found to approximate the target function well over a sufficiently large set of
training examples will also approximate the target function well over other
unobserved examples.
CONCEPT LEARNING AS SEARCH
Concept learning can be viewed as the task of searching through a large
space of hypotheses implicitly defined by the hypothesis representation.
The goal of this search is to find the hypothesis that best fits the training examples.
Example:
Consider the instances X and hypotheses H in the EnjoySport learning task. The attribute
Sky has three possible values, and AirTemp, Humidity, Wind, Water, and Forecast each have
two possible values, so the instance space X contains exactly
3 · 2 · 2 · 2 · 2 · 2 = 96 distinct instances.
Allowing each attribute to also take the "?" or "Φ" constraint gives
5 · 4 · 4 · 4 · 4 · 4 = 5120 syntactically distinct hypotheses within H.
Every hypothesis containing one or more "Φ" symbols represents the empty set of instances;
that is, it classifies every instance as negative. Hence there are only
1 + (4 · 3 · 3 · 3 · 3 · 3) = 973 semantically distinct hypotheses.
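These counts can be checked with a few lines of Python (an illustrative sketch, not part of the original notes):

sky_values, other_values = 3, 2
instances = sky_values * other_values ** 5                 # 3*2*2*2*2*2 = 96 distinct instances

# Each attribute may additionally take '?' or 'Ø' in a hypothesis.
syntactic = (sky_values + 2) * (other_values + 2) ** 5     # 5*4*4*4*4*4 = 5120 syntactically distinct

# All hypotheses containing 'Ø' denote the same empty concept, so count '?' plus the
# specific values per attribute, then add 1 for the single empty hypothesis.
semantic = 1 + (sky_values + 1) * (other_values + 1) ** 5  # 1 + 4*3*3*3*3*3 = 973

print(instances, syntactic, semantic)                      # 96 5120 973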
General-to-Specific Ordering of Hypotheses
Consider the two hypotheses
h1 = (Sunny, ?, ?, Strong, ?, ?)
h2 = (Sunny, ?, ?, ?, ?, ?)
Consider the sets of instances that are classified positive by h1 and by h2.
Because h2 imposes fewer constraints on the instance, it classifies more instances as positive.
So, any instance classified positive by h1 will also be classified positive by h2.
Therefore, h2 is more general than h1.
Given hypotheses hj and hk, hj is more-general-than-or-equal-to hk if and only if any
instance that satisfies hk also satisfies hj.
Definition: Let hj and hk be Boolean-valued functions defined over X. Then hj is more-
general-than-or-equal-to hk (written hj ≥_g hk) if and only if
(∀x ∈ X) [(hk(x) = 1) → (hj(x) = 1)]
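For the conjunctive representation used here, the ≥_g relation can be tested attribute by attribute. A small Python sketch (illustrative; the helper name more_general_or_equal is an assumption):

def more_general_or_equal(hj, hk):
    """hj >=_g hk: every instance satisfying hk also satisfies hj.
    If hk contains 'Ø' it matches nothing, so any hj is at least as general;
    otherwise every hj constraint must be '?' or equal to the hk constraint."""
    return 'Ø' in hk or all(a == '?' or a == b for a, b in zip(hj, hk))

h1 = ('Sunny', '?', '?', 'Strong', '?', '?')
h2 = ('Sunny', '?', '?', '?', '?', '?')
print(more_general_or_equal(h2, h1))   # True:  h2 imposes fewer constraints than h1
print(more_general_or_equal(h1, h2))   # False: h1 is strictly less general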
In the figure, the box on the left represents the set X of all instances, the box on the
right the set H of all hypotheses.
Each hypothesis corresponds to some subset of X-the subset of instances that it
classifies positive.
The arrows connecting hypotheses represent the more-general-than relation, with
the arrow pointing toward the less general hypothesis.
Note that the subset of instances characterized by h2 subsumes the subset
characterized by h1; hence h2 is more general than h1.
FIND-S: FINDING A MAXIMALLY SPECIFIC HYPOTHESIS
FIND-S Algorithm
1. Initialize h to the most specific hypothesis in H
2. For each positive training instance x
       For each attribute constraint ai in h
           If the constraint ai is satisfied by x
           Then do nothing
           Else replace ai in h by the next more general constraint that is satisfied by x
3. Output hypothesis h
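A compact Python sketch of FIND-S for this attribute-value representation (the function name find_s and the 'Yes'/'No' labels are assumptions made for illustration):

def find_s(training_examples):
    """training_examples: list of (instance_tuple, label) pairs, label 'Yes' or 'No'."""
    n = len(training_examples[0][0])
    h = ['Ø'] * n                                  # start with the most specific hypothesis
    for x, label in training_examples:
        if label != 'Yes':                         # FIND-S ignores every negative example
            continue
        for i, value in enumerate(x):
            if h[i] == 'Ø':                        # first positive example: adopt its value
                h[i] = value
            elif h[i] != value:                    # constraint not satisfied: generalize to '?'
                h[i] = '?'
    return tuple(h)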
To illustrate this algorithm, assume the learner is given the sequence of training
examples from the EnjoySport task:
Example   Sky     AirTemp   Humidity   Wind     Water   Forecast   EnjoySport
1         Sunny   Warm      Normal     Strong   Warm    Same       Yes
2         Sunny   Warm      High       Strong   Warm    Same       Yes
3         Rainy   Cold      High       Strong   Warm    Change     No
4         Sunny   Warm      High       Strong   Cool    Change     Yes
The first step of FIND-S is to initialize h to the most specific hypothesis in H
h0 = <Ø, Ø, Ø, Ø, Ø, Ø>
Consider the first training example
x1 = <Sunny Warm Normal Strong Warm Same>, +
Observing the first training example, it is clear that hypothesis h is too specific.
None of the "Ø" constraints in h are satisfied by this example, so each is replaced by
the next more general constraint that fits the example
h1 = <Sunny Warm Normal Strong Warm Same>
Consider the second training example
x2 = <Sunny, Warm, High, Strong, Warm, Same>, +
The second training example forces the algorithm to further generalize h, this
time substituting a "?" in place of any attribute value in h that is not satisfied by
the new example
h2 = <Sunny Warm ? Strong Warm Same>
Consider the third training example
x3 = <Rainy, Cold, High, Strong, Warm, Change>, -
Upon encountering the third training example, the algorithm makes no change to h. The FIND-S
algorithm simply ignores every negative example.
h3 = < Sunny Warm ? Strong Warm Same>
Consider the fourth training example
x4 = <Sunny Warm High Strong Cool Change>, +
The fourth example leads to a further generalization of h
h4 = < Sunny Warm ? Strong ? ? >
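Running the find_s sketch above on the four EnjoySport examples reproduces this trace, ending with h4:

D = [
    (('Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same'),   'Yes'),
    (('Sunny', 'Warm', 'High',   'Strong', 'Warm', 'Same'),   'Yes'),
    (('Rainy', 'Cold', 'High',   'Strong', 'Warm', 'Change'), 'No'),
    (('Sunny', 'Warm', 'High',   'Strong', 'Cool', 'Change'), 'Yes'),
]
print(find_s(D))   # ('Sunny', 'Warm', '?', 'Strong', '?', '?')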
The key property of the FIND-S algorithm
FIND-S is guaranteed to output the most specific hypothesis within H that is
consistent with the positive training examples
FIND-S algorithm’s final hypothesis will also be consistent with the negative
examples provided the correct target concept is contained in H, and provided the
training examples are correct.
Unanswered by FIND-S
1. Has the learner converged to the correct target concept?
2. Why prefer the most specific hypothesis?
3. Are the training examples consistent?
4. What if there are several maximally specific consistent hypotheses?
VERSION SPACES AND THE CANDIDATE-ELIMINATION ALGORITHM
The key idea in the CANDIDATE-ELIMINATION algorithm is to output a description of the set of all
hypotheses consistent with the training examples.
Representation
Definition: consistent- A hypothesis h is consistent with a set of training examples D if and
only if h(x) = c(x) for each example (x, c(x)) in D.
Consistent(h, D) ≡ (∀⟨x, c(x)⟩ ∈ D) h(x) = c(x)
Note the difference between the definitions of consistent and satisfies:
An example x is said to satisfy hypothesis h when h(x) = 1, regardless of
whether x is a positive or negative example of the target concept.
An example x is said to be consistent with hypothesis h iff h(x) = c(x).
Definition: Version space - The version space, denoted VS_{H,D}, with respect to hypothesis
space H and training examples D, is the subset of hypotheses from H consistent with the
training examples in D:
VS_{H,D} ≡ {h ∈ H | Consistent(h, D)}
The LIST-THEN-ELIMINATE Algorithm
The LIST-THEN-ELIMINATE algorithm first initializes the version space to contain
all hypotheses in H and then eliminates any hypothesis found inconsistent with any
training example.
1. VersionSpace ← a list containing every hypothesis in H
2. For each training example ⟨x, c(x)⟩
       Remove from VersionSpace any hypothesis h for which h(x) ≠ c(x)
3. Output the list of hypotheses in VersionSpace
The LIST-THEN-ELIMINATE Algorithm
LIST-THEN-ELIMINATE works in principle as long as the version space is finite.
However, because it requires exhaustively enumerating all hypotheses in H, it is not
feasible in practice.
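For a space as small as EnjoySport (5120 syntactic hypotheses), the exhaustive algorithm can still be sketched in Python; the names consistent and list_then_eliminate are illustrative:

from itertools import product

# Attribute domains in order: Sky, AirTemp, Humidity, Wind, Water, Forecast.
DOMAINS = [('Sunny', 'Cloudy', 'Rainy'), ('Warm', 'Cold'), ('Normal', 'High'),
           ('Strong', 'Weak'), ('Warm', 'Cool'), ('Same', 'Change')]

def satisfies(x, h):
    return all(c == '?' or c == v for c, v in zip(h, x))

def consistent(h, examples):
    """Consistent(h, D): h(x) must equal c(x) for every training example in D."""
    return all(satisfies(x, h) == (label == 'Yes') for x, label in examples)

def list_then_eliminate(examples):
    # Enumerate every syntactically distinct hypothesis (5 * 4^5 = 5120 of them here) ...
    all_hypotheses = product(*(values + ('?', 'Ø') for values in DOMAINS))
    # ... and keep only those consistent with every training example.
    return [h for h in all_hypotheses if consistent(h, examples)]

D = [(('Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same'),  'Yes'),
     (('Rainy', 'Cold', 'High',   'Strong', 'Warm', 'Change'), 'No')]
print(len(list_then_eliminate(D)))   # size of the version space after two examples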
A More Compact Representation for Version Spaces
The version space is represented by its most general and least general members. These
members form general and specific boundary sets that delimit the version space within the
partially ordered hypothesis space.
Definition: The general boundary G, with respect to hypothesis space H and training data D,
is the set of maximally general members of H consistent with D:
G ≡ {g ∈ H | Consistent(g, D) ∧ (¬∃g' ∈ H)[(g' >_g g) ∧ Consistent(g', D)]}
Definition: The specific boundary S, with respect to hypothesis space H and training data D,
is the set of minimally general (i.e., maximally specific) members of H consistent with D:
S ≡ {s ∈ H | Consistent(s, D) ∧ (¬∃s' ∈ H)[(s >_g s') ∧ Consistent(s', D)]}
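Given a version space (for example, the list produced by list_then_eliminate above) and the ≥_g relation, the two boundary sets can be extracted directly from their definitions; a rough Python sketch with assumed helper names:

def more_general_or_equal(hj, hk):
    # hj >=_g hk for the conjunctive representation (see the earlier sketch)
    return 'Ø' in hk or all(a == '?' or a == b for a, b in zip(hj, hk))

def strictly_more_general(hj, hk):
    return more_general_or_equal(hj, hk) and not more_general_or_equal(hk, hj)

def general_boundary(version_space):
    """G: members of the version space with no strictly more general member in it."""
    return [g for g in version_space
            if not any(strictly_more_general(h, g) for h in version_space)]

def specific_boundary(version_space):
    """S: members that are not strictly more general than any other member."""
    return [s for s in version_space
            if not any(strictly_more_general(s, h) for h in version_space)]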
CANDIDATE-ELIMINATION Learning Algorithm
The CANDIDATE-ELIMINATION algorithm computes the version space containing all
hypotheses from H that are consistent with an observed sequence of training examples.
Initialize G to the set of maximally general hypotheses in H
Initialize S to the set of maximally specific hypotheses in H
For each training example d, do
• If d is a positive example
   • Remove from G any hypothesis inconsistent with d
   • For each hypothesis s in S that is not consistent with d
      • Remove s from S
      • Add to S all minimal generalizations h of s such that
        h is consistent with d, and some member of G is more general than h
      • Remove from S any hypothesis that is more general than another hypothesis in S
• If d is a negative example
   • Remove from S any hypothesis inconsistent with d
   • For each hypothesis g in G that is not consistent with d
      • Remove g from G
      • Add to G all minimal specializations h of g such that
        h is consistent with d, and some member of S is more specific than h
      • Remove from G any hypothesis that is less general than another hypothesis in G
The CANDIDATE-ELIMINATION algorithm using version spaces
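A Python sketch of the CANDIDATE-ELIMINATION loop for this conjunctive representation; the helper names (min_generalization, min_specializations) and the 'Yes'/'No' labels are assumptions made for illustration. Note that the minimal generalization is unique for this representation.

# Attribute domains in order: Sky, AirTemp, Humidity, Wind, Water, Forecast.
DOMAINS = [('Sunny', 'Cloudy', 'Rainy'), ('Warm', 'Cold'), ('Normal', 'High'),
           ('Strong', 'Weak'), ('Warm', 'Cool'), ('Same', 'Change')]

def satisfies(x, h):
    return all(c == '?' or c == v for c, v in zip(h, x))

def more_general_or_equal(hj, hk):
    return 'Ø' in hk or all(a == '?' or a == b for a, b in zip(hj, hk))

def min_generalization(s, x):
    """The unique minimal generalization of s that covers the positive example x."""
    return tuple(v if c == 'Ø' else (c if c == v else '?') for c, v in zip(s, x))

def min_specializations(g, x):
    """Minimal specializations of g that exclude the negative example x."""
    return [g[:i] + (value,) + g[i + 1:]
            for i, c in enumerate(g) if c == '?'
            for value in DOMAINS[i] if value != x[i]]

def candidate_elimination(examples):
    G = {('?',) * 6}                        # most general boundary, G0
    S = {('Ø',) * 6}                        # most specific boundary, S0
    for x, label in examples:
        if label == 'Yes':                  # positive example: prune G, generalize S
            G = {g for g in G if satisfies(x, g)}
            S = {s if satisfies(x, s) else min_generalization(s, x) for s in S}
            S = {s for s in S if any(more_general_or_equal(g, s) for g in G)}
        else:                               # negative example: prune S, specialize G
            S = {s for s in S if not satisfies(x, s)}
            new_G = set()
            for g in G:
                if not satisfies(x, g):     # g already excludes the negative example
                    new_G.add(g)
                    continue
                for h in min_specializations(g, x):
                    if any(more_general_or_equal(h, s) for s in S):
                        new_G.add(h)
            G = new_G
        # keep only maximally general members of G and maximally specific members of S
        G = {g for g in G if not any(h != g and more_general_or_equal(h, g) for h in G)}
        S = {s for s in S if not any(h != s and more_general_or_equal(s, h) for h in S)}
    return S, G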
An Illustrative Example
Example   Sky     AirTemp   Humidity   Wind     Water   Forecast   EnjoySport
1         Sunny   Warm      Normal     Strong   Warm    Same       Yes
2         Sunny   Warm      High       Strong   Warm    Same       Yes
3         Rainy   Cold      High       Strong   Warm    Change     No
4         Sunny   Warm      High       Strong   Cool    Change     Yes
The CANDIDATE-ELIMINATION algorithm begins by initializing the version space to the set
of all hypotheses in H:
Initializing the G boundary set to contain the most general hypothesis in H,
G0 = {<?, ?, ?, ?, ?, ?>}
Initializing the S boundary set to contain the most specific (least general) hypothesis,
S0 = {<Ø, Ø, Ø, Ø, Ø, Ø>}
When the first training example is presented, the CANDIDATE-ELIMINATION algorithm
checks the S boundary and finds that it is overly specific: it fails to cover the positive
example.
The boundary is therefore revised by moving it to the least more general hypothesis that
covers this new example.
No update of the G boundary is needed in response to this training example because G0
correctly covers this example.
When the second training example is observed, it has a similar effect of generalizing
S further to S2, leaving G again unchanged (i.e., G2 = G1 = G0).
Consider the third training example. This negative example reveals that the G
boundary of the version space is overly general; that is, the hypothesis in G
incorrectly predicts that this new example is a positive example.
The hypothesis in the G boundary must therefore be specialized until it correctly
classifies this new negative example
Given that there are six attributes that could be specified to specialize G2, why are there
only three new hypotheses in G3?
For example, the hypothesis h = (?, ?, Normal, ?, ?, ?) is a minimal specialization of
G2 that correctly labels the new example as a negative example, but it is not included
in G3. The reason this hypothesis is excluded is that it is inconsistent with the
previously encountered positive examples.
Consider the fourth training example.
This positive example further generalizes the S boundary of the version space. It
also results in removing one member of the G boundary, because this member fails
to cover the new positive example.
After processing these four examples, the boundary sets S4 and G4 delimit the version space
of all hypotheses consistent with the set of incrementally observed training examples.
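Running the candidate_elimination sketch above on the four EnjoySport examples reproduces these final boundary sets:

D = [
    (('Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same'),   'Yes'),
    (('Sunny', 'Warm', 'High',   'Strong', 'Warm', 'Same'),   'Yes'),
    (('Rainy', 'Cold', 'High',   'Strong', 'Warm', 'Change'), 'No'),
    (('Sunny', 'Warm', 'High',   'Strong', 'Cool', 'Change'), 'Yes'),
]
S4, G4 = candidate_elimination(D)
print(S4)   # {('Sunny', 'Warm', '?', 'Strong', '?', '?')}
print(G4)   # {('Sunny', '?', '?', '?', '?', '?'), ('?', 'Warm', '?', '?', '?', '?')} (in some order)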