Unit 5 : Learning
Topics
In figure 1, the value of node A is computed as 10, and this value is stored for future use.
In figure 2, the value of A is required again to solve a new game tree.
Instead of re-computing the value of node A, rote learning applies the stored value directly.
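Rote learning of game-tree values behaves like memoization: the first search stores each node's value, and a later search reuses the stored value instead of re-searching the subtree. The tree and leaf values below are hypothetical illustrations, not the figures from the text.

```python
tree = {                       # internal node -> children
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F", "G"],
}
leaves = {"D": 10, "E": 7, "F": 4, "G": 9}

cache = {}                     # rote-learned node values
evaluated = []                 # leaves actually evaluated, to show the savings

def value(node, maximizing=True):
    """Minimax value of `node`, reusing a stored value when available."""
    if node in cache:          # rote learning: apply the stored value directly
        return cache[node]
    if node in leaves:
        evaluated.append(node)
        v = leaves[node]
    else:
        child_vals = [value(c, not maximizing) for c in tree[node]]
        v = max(child_vals) if maximizing else min(child_vals)
    cache[node] = v
    return v

first = value("A")             # computes and stores values for the whole subtree
second = value("A")            # cache hit: no leaves are re-evaluated
```

The second call returns the same value without touching the leaves again, which is exactly the saving the figures illustrate.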
The t terms are the values of the sixteen features that contribute to the
evaluation. The c terms are the coefficients that are attached to each of
these values. As learning progresses, the c values will change.
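The weighted-sum evaluation above can be sketched in code. This is a minimal illustration, not Samuel's exact scheme: it uses three features instead of sixteen, and a simple error-driven update rule for the c coefficients.

```python
def evaluate(c, t):
    """Weighted-sum evaluation: value = c1*t1 + c2*t2 + ..."""
    return sum(ci * ti for ci, ti in zip(c, t))

def adjust(c, t, target, rate=0.01):
    """Nudge each coefficient to reduce the error (evaluate - target)."""
    error = evaluate(c, t) - target
    return [ci - rate * error * ti for ci, ti in zip(c, t)]

c = [0.0, 0.0, 0.0]            # coefficients, changed as learning progresses
t = [2.0, -1.0, 0.5]           # hypothetical feature values for one position
for _ in range(200):
    c = adjust(c, t, target=1.0)

learned = evaluate(c, t)       # close to the target after repeated adjustment
```

The key point is that the t values are fixed properties of the position, while the c values are what learning changes.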
Classification is an important component of many problem-solving tasks. Before classification can be done, the classes it will use must be defined, and their definition will depend on the use to which they are put. There are two approaches:
1. Isolate a set of features that are relevant to the task domain, and define each class by a weighted sum of the values of these features. Example: if the task is weather prediction, the features can be measurements such as rainfall and the location of cold fronts.
2. Isolate a set of features that are relevant to the task domain, and define each class as a structure composed of these features. Example: in classifying animals, the features can be such things as color and length of neck.
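The first scheme can be sketched directly: each class is a set of weights over the same features, and an instance goes to the highest-scoring class. The feature names and weights below are hypothetical illustrations of the weather-prediction example.

```python
classes = {
    "rain":  {"rainfall": 0.8, "cold_front": 0.5, "humidity": 0.3},
    "clear": {"rainfall": -0.6, "cold_front": -0.2, "humidity": -0.1},
}

def classify(features):
    """Score each class by sum(weight * feature value); return the best class."""
    scores = {
        name: sum(w * features.get(f, 0.0) for f, w in weights.items())
        for name, weights in classes.items()
    }
    return max(scores, key=scores.get)

label = classify({"rainfall": 1.0, "cold_front": 1.0, "humidity": 0.7})
```

Learning, in this scheme, means adjusting the weights rather than the feature set.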
The idea of producing a classification program that can evolve its own
class definitions is called concept learning or induction.
The figure shown above gives the structural description of an arch. Figure (b) describes an arch in which a brick is supported by two bricks; Figure (c) describes an arch in which a wedge is supported by two bricks.
5. Explanation-Based Learning
Limitation of learning by example: induction learning requires a substantial number of training instances to describe a complex concept.
But human beings can learn quite a bit from a single example. Humans don't need to see dozens of positive and negative examples of fork (chess) positions in order to learn to avoid this trap in the future, and perhaps to use it to their advantage.
What makes such single-example learning possible? The answer
is knowledge. Much of the recent work in machine learning has
moved away from the empirical, data intensive approach described
in the last section toward this more analytical knowledge intensive
approach.
A number of independent studies led to the characterization of this approach as explanation-based learning (EBL). An EBL system attempts to learn from a single example x by explaining why x is an example of the target concept. The explanation is then generalized, and the system's performance is improved through the availability of this knowledge.
EBL (continued)
During the explanation step, the domain theory is used to prune away all
the unimportant aspects of the training example with respect to the goal
concept. What is left is an explanation of why the training example is an
instance of the goal concept. This explanation is expressed in terms that
satisfy the operationality criterion. The next step is to generalize the
explanation as far as possible while still describing the goal concept.
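The explanation step can be sketched with the classic "cup" domain theory, a standard EBL illustration (the rules and facts below are hypothetical, not from the text). Proving the goal concept from the domain theory keeps only the observed facts used in the proof, pruning away everything irrelevant.

```python
theory = {                         # goal concept -> required sub-concepts
    "cup": ["liftable", "holds_liquid"],
    "liftable": ["light", "has_handle"],
    "holds_liquid": ["upward_concavity"],
}

def explain(goal, facts):
    """Return the observed facts used to prove `goal`, or None on failure."""
    if goal in facts:
        return {goal}              # an operational, directly observed feature
    if goal not in theory:
        return None                # cannot be proved from observation or theory
    used = set()
    for sub in theory[goal]:
        sub_used = explain(sub, facts)
        if sub_used is None:
            return None            # the proof fails if any sub-concept fails
        used |= sub_used
    return used

example = {"light", "has_handle", "upward_concavity",
           "color_red", "owned_by_fred"}     # the last two are irrelevant
relevant = explain("cup", example)
```

The explanation retains only the features that matter to the goal concept; generalizing it (for example, replacing constants with variables) then yields a rule applicable beyond this single example.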
5.1.6 : Discovery
Learning is the process by which one entity acquires
knowledge. Usually that knowledge is already possessed by
some number of other entities who may serve as teachers.
Discovery is a restricted form of learning in which one
entity acquires knowledge without the help of a teacher.
Various types are as follows:
5.1.6.a Theory-Driven Discovery
5.1.6.b Data-Driven Discovery
5.1.6.c Clustering
BACON has also discovered a wide variety of scientific laws, such as Kepler's third law, Ohm's law, the conservation of momentum, and Joule's law.
Limitation: much more work must be done in the areas of science that BACON does not model.
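A BACON-flavored sketch of data-driven discovery: given planetary periods T (in years) and orbital distances D (in AU), search small integer exponent pairs for a combination T**a / D**b that stays nearly constant across the data, rediscovering Kepler's third law T² / D³ = constant. The planetary data are real approximate values; the search procedure is a simplification of BACON's heuristics, not the actual program.

```python
planets = [("Earth", 1.00, 1.000), ("Mars", 1.88, 1.524),
           ("Jupiter", 11.86, 5.203), ("Saturn", 29.46, 9.539)]

def spread(values):
    """Relative spread: (max - min) / mean, small when nearly constant."""
    return (max(values) - min(values)) / (sum(values) / len(values))

best = None
for a in range(1, 4):              # try small integer exponents for T
    for b in range(1, 4):          # ... and for D
        ratios = [T**a / D**b for _, T, D in planets]
        s = spread(ratios)
        if best is None or s < best[0]:
            best = (s, a, b)

law = (best[1], best[2])           # exponents of the most invariant ratio
```

The exponent pair (2, 3) wins by a wide margin, which is Kepler's third law.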
5.1.6.c : Clustering
Clustering is very similar to induction. In inductive learning, a program learns to classify objects based on the labeling provided by a teacher.
In clustering, no class labels are provided. The program must discover for itself the natural classes that exist for the objects, in addition to a method for classifying instances.
AUTOCLASS is one program that accepts a number of
training cases and hypothesizes a set of classes. For any
given case, the program provides a set of probabilities that
predict into which classes the case is likely to fall.
In one application, AUTOCLASS found meaningful new
classes of stars from their infrared spectral data. This was an
instance of true discovery by computer, since the facts it
discovered were previously unknown to astronomy.
AUTOCLASS uses statistical Bayesian reasoning of the
type discussed.
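AUTOCLASS itself fits a Bayesian mixture model and reports class-membership probabilities; as a much simpler sketch of the same idea, discovering classes without labels, here is a tiny k-means pass over one-dimensional data. The data, k = 2, and the initial centers are hypothetical illustrations (and empty-cluster handling is omitted for brevity).

```python
data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]       # two obvious natural classes
centers = [0.0, 5.0]                        # hypothetical initial guesses

for _ in range(10):                         # alternate assignment / re-centering
    clusters = [[], []]
    for x in data:
        nearest = min(range(2), key=lambda i: abs(x - centers[i]))
        clusters[nearest].append(x)
    centers = [sum(c) / len(c) for c in clusters]
```

The program ends up with one center near 1.0 and one near 9.1: it has discovered the two natural classes with no teacher-provided labels.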
5.1.7 : Analogy
Transformational analogy does not look at how the old problem was solved; it looks only at the final solution of the old problem.
Often the internal details of the old solution are also relevant to solving the new problem. The detailed history of a problem-solving episode is called a derivation.
Analogy that uses these derivations in the new problem-solving process is referred to as derivational analogy.
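The distinction can be made concrete with a toy representation (hypothetical, not from the text): transformational analogy would copy only the old final answer, while derivational analogy stores the old derivation, the sequence of solution steps, and replays it on the new problem.

```python
# The recorded derivation: how the old problem was solved, step by step.
old_derivation = [("add", 3), ("mul", 2)]

def replay(derivation, start):
    """Derivational analogy: re-run the recorded steps on a new problem."""
    x = start
    for op, arg in derivation:
        x = x + arg if op == "add" else x * arg
    return x

old_solution = replay(old_derivation, 1)    # original episode: (1 + 3) * 2
new_solution = replay(old_derivation, 5)    # reuse the derivation, not the answer
```

Copying `old_solution` to the new problem would be useless; replaying the derivation adapts the old reasoning to the new starting point.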
[Figure: animal-classification tree distinguishing Elephant, Mouse, Giraffe, and Dinosaur by attribute questions such as "Wild?"]
Components of an Expert System:
1. Knowledge Acquisition Subsystem
2. Knowledge Base
3. Inference Engine
4. User Interface
5. Blackboard (Workplace)
6. Explanation Subsystem (Justifier)
7. Knowledge Refining System
The user interacts with the system through the user interface, which connects to the inference engine and the knowledge base.
2. Knowledge Base
The knowledge base contains the knowledge necessary for understanding, formulating, and solving problems.
Two basic knowledge-base elements:
1. Facts
2. Special heuristics, or rules that direct the use of knowledge
Knowledge is the primary raw material of an ES, incorporated through a chosen knowledge representation.
3. Inference Engine:
The brain of the ES
The control structure (rule interpreter)
Provides methodology for reasoning
Major Elements are as follows:
1. Interpreter
2. Scheduler
3. Consistency Enforcer
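A minimal sketch of the interpreter element: a forward-chaining loop that applies knowledge-base rules to known facts until nothing new can be concluded. The facts and rules below are hypothetical examples, not from the text.

```python
facts = {"fever", "cough"}                 # initial facts from the user
rules = [                                  # (conditions, conclusion) pairs
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected"}, "recommend_rest"),
]

changed = True
while changed:                             # interpreter: fire rules until quiescent
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)          # add the new conclusion to working memory
            changed = True
```

A real inference engine adds the scheduler (choosing which applicable rule fires next) and the consistency enforcer (keeping the emerging conclusions coherent) on top of this basic loop.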
4. User Interface:
Language processor for friendly, problem-oriented
communication
Consists of menus and graphics.
5. Blackboard (Workplace):
ES Shell
Thanks