[read Chapter 2]
[suggested exercises 2.2, 2.3, 2.4, 2.6]

- Learning from examples
- General-to-specific ordering over hypotheses
- Version spaces and candidate elimination algorithm
- Picking new examples
- The need for inductive bias

Note: simple approach assuming no noise, illustrates key concepts
Representing Hypotheses
Many possible representations.

Here, h is a conjunction of constraints on attributes. Each constraint can be:
- a specific value (e.g., Water = Warm)
- don't care (e.g., "Water = ?")
- no value allowed (e.g., "Water = ∅")

For example:

  Sky     AirTemp  Humid  Wind    Water  Forecast
  <Sunny  ?        ?      Strong  ?      Same>
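As a minimal sketch (not from the original slides), such a hypothesis can be represented in Python as a tuple of six constraints, with "?" for don't care and "0" standing in for the empty set ∅:

```python
def matches(h, x):
    """True iff hypothesis h classifies instance x as positive:
    every constraint is "?" (don't care) or equals x's value there.
    A "0" (empty-set) constraint never matches any value, so an
    all-"0" hypothesis classifies every instance as negative."""
    return all(c == "?" or c == v for c, v in zip(h, x))

h = ("Sunny", "?", "?", "Strong", "?", "Same")
print(matches(h, ("Sunny", "Warm", "High", "Strong", "Cool", "Same")))  # True
print(matches(h, ("Rainy", "Warm", "High", "Strong", "Cool", "Same")))  # False
```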
The inductive learning hypothesis: Any hypothesis found to approximate the target function well over a sufficiently large set of training examples will also approximate the target function well over other unobserved examples.
[Figure: instances x1, x2 and hypotheses h1, h2, arranged along a specific-to-general ordering.
 x1 = <Sunny, Warm, High, Strong, Cool, Same>
 x2 = <Sunny, Warm, High, Light, Warm, Same>]
Find-S Algorithm
1. Initialize h to the most specific hypothesis in H
2. For each positive training instance x
   - For each attribute constraint a_i in h:
     If the constraint a_i in h is satisfied by x
     Then do nothing
     Else replace a_i in h by the next more general constraint that is satisfied by x
3. Output hypothesis h
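The steps above can be sketched in Python for conjunctive hypotheses. Note an assumption: only the resulting hypotheses h0–h4 appear on the trace slide below, so the full attribute values of x2–x4 are reconstructed from the standard EnjoySport data, not quoted from it:

```python
def find_s(examples):
    """Find-S: start from the most specific hypothesis and minimally
    generalize it on each positive example; negatives are ignored."""
    h = ["0"] * len(examples[0][0])     # h0: matches nothing
    for x, label in examples:
        if label != "Yes":
            continue                    # Find-S skips negative examples
        for i, value in enumerate(x):
            if h[i] == "0":
                h[i] = value            # first positive: copy its values
            elif h[i] != value:
                h[i] = "?"              # conflict: generalize to don't-care

    return h

train = [  # assumed standard EnjoySport training examples
    (("Sunny", "Warm", "Normal", "Strong", "Warm", "Same"),   "Yes"),  # x1
    (("Sunny", "Warm", "High",   "Strong", "Warm", "Same"),   "Yes"),  # x2
    (("Rainy", "Cold", "High",   "Strong", "Warm", "Change"), "No"),   # x3
    (("Sunny", "Warm", "High",   "Strong", "Cool", "Change"), "Yes"),  # x4
]
print(find_s(train))  # ['Sunny', 'Warm', '?', 'Strong', '?', '?'] = h4
```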
[Figure: Find-S trace — positive instances x1, x2, x4 and negative instance x3, with hypotheses h0 through h4 moving from specific toward general.]

h0 = <∅, ∅, ∅, ∅, ∅, ∅>
h1 = <Sunny Warm Normal Strong Warm Same>
h2 = <Sunny Warm ?      Strong Warm Same>
h3 = <Sunny Warm ?      Strong Warm Same>
h4 = <Sunny Warm ?      Strong ?    ?   >
Version Spaces
A hypothesis h is consistent with a set of training examples D of target concept c if and only if h(x) = c(x) for each training example <x, c(x)> in D.

  Consistent(h, D) ≡ (∀<x, c(x)> ∈ D) h(x) = c(x)

The version space, VS_{H,D}, with respect to hypothesis space H and training examples D, is the subset of hypotheses from H consistent with all training examples in D.

  VS_{H,D} ≡ {h ∈ H | Consistent(h, D)}
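These two definitions translate directly into a brute-force List-Then-Eliminate sketch. Assumptions worth flagging: it uses the standard EnjoySport attribute value sets (Sky also takes a third value, Cloudy) and reconstructed training examples, and it omits hypotheses containing "∅" constraints, since none of those can be consistent once a positive example is seen:

```python
from itertools import product

def matches(h, x):
    return all(c == "?" or c == v for c, v in zip(h, x))

def consistent(h, examples):
    """Consistent(h, D): h(x) = c(x) for every <x, c(x)> in D."""
    return all(matches(h, x) == (label == "Yes") for x, label in examples)

# Assumed EnjoySport value sets; each constraint slot is "?" or one value.
values = [["Sunny", "Cloudy", "Rainy"], ["Warm", "Cold"], ["Normal", "High"],
          ["Strong", "Light"], ["Warm", "Cool"], ["Same", "Change"]]
H = list(product(*[v + ["?"] for v in values]))

train = [  # assumed standard EnjoySport training examples
    (("Sunny", "Warm", "Normal", "Strong", "Warm", "Same"),   "Yes"),
    (("Sunny", "Warm", "High",   "Strong", "Warm", "Same"),   "Yes"),
    (("Rainy", "Cold", "High",   "Strong", "Warm", "Change"), "No"),
    (("Sunny", "Warm", "High",   "Strong", "Cool", "Change"), "Yes"),
]
VS = [h for h in H if consistent(h, train)]
print(len(VS))  # 6 hypotheses survive all four examples
```

Candidate elimination computes the same set, but by maintaining only the S and G boundaries instead of enumerating H.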
If d is a negative example:
- Remove from S any hypothesis inconsistent with d
- For each hypothesis g in G that is not consistent with d:
  - Remove g from G
  - Add to G all minimal specializations h of g such that
    1. h is consistent with d, and
    2. some member of S is more specific than h
- Remove from G any hypothesis that is less general than another hypothesis in G
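For conjunctive hypotheses, the "minimal specializations" step can be sketched as: replace a single "?" constraint with any value that the negative instance does not have there (the attribute value sets below are assumed, as in the standard EnjoySport task):

```python
def min_specializations(g, values, x_neg):
    """All minimal specializations of g that exclude the negative
    instance x_neg: replace one "?" constraint with any value for
    that attribute other than x_neg's value there."""
    specs = []
    for i, c in enumerate(g):
        if c == "?":
            for v in values[i]:
                if v != x_neg[i]:
                    specs.append(g[:i] + (v,) + g[i + 1:])
    return specs

# Assumed EnjoySport value sets
values = [["Sunny", "Cloudy", "Rainy"], ["Warm", "Cold"], ["Normal", "High"],
          ["Strong", "Light"], ["Warm", "Cool"], ["Same", "Change"]]
x_neg = ("Rainy", "Cold", "High", "Strong", "Warm", "Change")
specs = min_specializations(("?",) * 6, values, x_neg)
print(len(specs))  # 7 candidate specializations of the maximally general g
```

Conditions 1 and 2 then prune this candidate set: only those specializations that some member of S is more specific than are kept in G.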
Example Trace
S0: {<∅, ∅, ∅, ∅, ∅, ∅>}

G0: {<?, ?, ?, ?, ?, ?>}
[Figure: the version space's S and G boundaries, used to classify new instances.]

<Sunny Warm Normal Strong Cool Change>
<Rainy Cool Normal Light Warm Same>
<Sunny Warm Normal Light Warm Same>
An UNbiased Learner

Idea: Choose H that expresses every teachable concept (i.e., H is the power set of X).

Consider H′ = disjunctions, conjunctions, negations over previous H. E.g.,

  <Sunny Warm Normal ? ? ?> ∨ ¬<? ? ? ? ? Change>

What are S, G in this case?
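The gap between the two spaces can be made concrete with a quick count, assuming the standard EnjoySport attribute value sets (3 values for Sky, 2 for each of the other five attributes):

```python
n_values = [3, 2, 2, 2, 2, 2]         # assumed values per attribute

n_instances = 1
for k in n_values:
    n_instances *= k                  # |X| = 96 distinct instances

# Unbiased H: every subset of X is a candidate concept.
n_unbiased = 2 ** n_instances         # 2^96, roughly 8 * 10^28 concepts

# Conjunctive H: each slot is "?" or one value, plus one empty hypothesis
# (all hypotheses containing "0" classify every instance as negative).
n_conjunctive = 1
for k in n_values:
    n_conjunctive *= k + 1
n_conjunctive += 1                    # 973 semantically distinct hypotheses

print(n_instances, n_conjunctive, n_unbiased)
```

The conjunctive space is tiny compared to the power set, which is exactly why it supports inductive leaps and the power set does not.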
Inductive Bias
Consider
- concept learning algorithm L
- instances X, target concept c
- training examples D_c = {<x, c(x)>}
- let L(x_i, D_c) denote the classification assigned to the instance x_i by L after training on data D_c.
[Figure: equivalent deductive system — the training examples and a new instance, together with the assertion "H contains the target concept", are fed to a theorem prover, which outputs a classification of the new instance, or "don't know".]
Summary Points
1. Concept learning as search through H
2. General-to-specific ordering over H
3. Version space candidate elimination algorithm
4. S and G boundaries characterize learner's uncertainty
5. Learner can generate useful queries
6. Inductive leaps possible only if learner is biased
7. Inductive learners can be modelled by equivalent deductive systems