Artificial Intelligence
• Neural Networks (please read Chapter 19)
• Genetic Algorithms
• Genetic Programming
• Behavior-Based Systems
Background
- Biological models
- Artificial models
Brain vs. Digital Computers (1)

                      Human                 Computer
Processing elements   100 billion neurons   10 million gates
Interconnects         1,000 per neuron      a few
Cycles per second     1,000                 500 million
2x improvement        200,000 years         2 years
Brain vs. Digital Computers (2)
- Experiential knowledge.
What is an Artificial Neural Network?
Neurons vs. Units (1)
Chemistry, biochemistry, quantum effects.
Computing Elements
A typical unit:
Planning in building a Neural Network
Input function: in_i = Σ_j W_ji · a_j (weighted sum of the inputs)
Activation function g: a_i = g(in_i)
A Computing Unit.
Now in more detail but for a
particular model only
1) Step function
2) Sign function
3) Sigmoid function
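A computing unit applies the activation function g to the weighted sum of its inputs. The three activation functions listed above, plus such a unit, can be sketched in Python (function names are illustrative):

```python
import math

def step(x, t=0.0):
    """Step function: 1 if the input reaches threshold t, else 0."""
    return 1 if x >= t else 0

def sign(x):
    """Sign function: +1 for non-negative input, -1 otherwise."""
    return 1 if x >= 0 else -1

def sigmoid(x):
    """Sigmoid: smooth, differentiable squashing of x into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def unit_output(weights, inputs, g=sigmoid):
    """A computing unit: activation g applied to the weighted-sum input."""
    return g(sum(w * a for w, a in zip(weights, inputs)))
```

Only the sigmoid is differentiable everywhere, which is why it is the one used for gradient-based learning later in the lecture.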
- Arranged in layers.

[Network diagram: inputs I1, I2; hidden units H3, H4, H5, H6; output O7.
Weights: W13 = -1, W24 = -1, W16 = 1, W25 = 1, W35 = 1, W46 = 1, W57 = 1, W67 = 1.
Thresholds: t = -0.5 (H3, H4), t = 1.5 (H5, H6), t = 0.5 (O7).]
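Reading the weights and thresholds off the diagram (W13 = -1, W24 = -1, W16 = W25 = W35 = W46 = W57 = W67 = 1; thresholds -0.5 for H3/H4, 1.5 for H5/H6, 0.5 for O7) and assuming each unit fires when its weighted input exceeds its threshold, the network computes XOR; a minimal sketch:

```python
def fire(x, t):
    # Threshold unit: output 1 when the weighted input exceeds threshold t.
    return 1 if x > t else 0

def network(i1, i2):
    h3 = fire(-1 * i1, -0.5)            # W13 = -1, t = -0.5  -> NOT I1
    h4 = fire(-1 * i2, -0.5)            # W24 = -1, t = -0.5  -> NOT I2
    h5 = fire(1 * h3 + 1 * i2, 1.5)     # W35 = W25 = 1, t = 1.5 -> I2 AND NOT I1
    h6 = fire(1 * i1 + 1 * h4, 1.5)     # W16 = W46 = 1, t = 1.5 -> I1 AND NOT I2
    return fire(1 * h5 + 1 * h6, 0.5)   # W57 = W67 = 1, t = 0.5 -> OR

# network(0,0)=0, network(0,1)=1, network(1,0)=1, network(1,1)=0 (XOR)
```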
- Can oscillate.
Recurrent Network (2)
• What is a bias?
• How can we learn the bias value?
• Its output is
  – 1, if W1X1 + W2X2 > T (the threshold)
  – 0, otherwise
For example: the majority function (covered in this lecture).
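The majority function is linearly separable, so a single threshold unit computes it: give every input weight 1 and set the threshold to n/2. An illustrative sketch:

```python
def majority(inputs):
    # Single threshold unit: all weights 1, threshold n/2.
    # Fires iff more than half of the 0/1 inputs are 1.
    n = len(inputs)
    return 1 if sum(1 * x for x in inputs) > n / 2 else 0
```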
Example in 3-dimensional space
Perceptrons & XOR
• XOR function
Bad news:
- There are not many linearly separable functions.
Good news:
- There is a perceptron algorithm that will learn
any linearly separable function, given enough
training examples.
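The perceptron learning rule adjusts each weight by alpha * (target - output) * input and repeats until every training example is classified correctly. A sketch, using logical AND as an illustrative linearly separable target (the threshold is learned as a bias weight on a constant input):

```python
def train_perceptron(examples, alpha=0.1, max_epochs=100):
    """Perceptron learning rule on examples of ((x1, ..., xn), target)."""
    n = len(examples[0][0])
    w = [0.0] * (n + 1)                        # last weight acts as the bias
    for _ in range(max_epochs):
        errors = 0
        for x, target in examples:
            xb = list(x) + [1]                 # constant bias input
            out = 1 if sum(wi * xi for wi, xi in zip(w, xb)) > 0 else 0
            if out != target:
                errors += 1
                for i in range(n + 1):         # w_i += alpha * (t - o) * x_i
                    w[i] += alpha * (target - out) * xb[i]
        if errors == 0:                        # all examples classified: done
            break
    return w

# AND is linearly separable, so the rule converges on it:
and_examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_perceptron(and_examples)
```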
Learning Linearly Separable Functions (2)
Here the decision tree is better than the perceptron.
• Braitenberg Vehicles
• Quantum Neural BV
Evaluating a feedforward NN in software is easy
1. Computes the error term for the output units using the
observed error.
[Plot: error as a function of the weights in multidimensional space.]
Compute deltas

Gradient
• Trying to make the error decrease the fastest
• Compute:
  GradE = [∂E/∂w1, ∂E/∂w2, ..., ∂E/∂wn]
• Change the i-th weight by:
  Δwi = -α · ∂E/∂wi
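These two bullets amount to the following loop (the quadratic error function below is only an illustration):

```python
def grad_descent(grad_e, w, alpha=0.1, steps=100):
    """Gradient descent: change the i-th weight by -alpha * dE/dwi."""
    for _ in range(steps):
        g = grad_e(w)                                   # GradE = [dE/dw1, ..., dE/dwn]
        w = [wi - alpha * gi for wi, gi in zip(w, g)]   # deltawi = -alpha * dE/dwi
    return w

# Illustrative error: E(w) = (w1 - 3)^2 + (w2 + 1)^2, minimum at (3, -1),
# so dE/dw1 = 2*(w1 - 3) and dE/dw2 = 2*(w2 + 1).
grad_e = lambda w: [2 * (w[0] - 3.0), 2 * (w[1] + 1.0)]
w = grad_descent(grad_e, [0.0, 0.0])
```

Each step multiplies the distance to the minimum by (1 - 2·alpha), so w converges to (3, -1).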
Derivatives of error for weights
• We need a derivative!
• Activation function must be continuous,
differentiable, non-decreasing, and easy to
compute
Can’t use an LTU (linear threshold unit): its step activation is not differentiable.
• To effectively assign credit / blame to units
in hidden layers, we want to look at the
first derivative of the activation function
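For the sigmoid, that first derivative is particularly convenient: g'(x) = g(x)·(1 - g(x)), so it can be computed from the unit's own output. A sketch with a numerical sanity check:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_deriv(x):
    # g'(x) = g(x) * (1 - g(x)): reuses the forward output a = g(x).
    a = sigmoid(x)
    return a * (1.0 - a)

# Check against a central-difference numerical derivative:
h = 1e-6
numeric = (sigmoid(0.5 + h) - sigmoid(0.5 - h)) / (2 * h)
```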
[Annotated weight-update formula; labels point at the derivative, alpha, and the "miss" terms.]
Here we have the general formula with the derivative; next we use it for the sigmoid.
Updating interior weights
• Layer k units provide values to all layer
k+1 units
• “miss” is the sum of misses from all units in layer k+1
• missj = Σi [ ai(1 - ai) (Ti - ai) wji ]
• weights coming into this unit are adjusted
based on their contribution
deltakj = alpha * Ik * aj * (1 - aj) * missj    (for layer k+1)
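In code, the hidden-unit "miss" and the weight update from the two formulas above look roughly like this (variable names are illustrative; w is a dict keyed by (from_unit, to_unit)):

```python
def hidden_miss(j, a_next, targets, w):
    """'miss' for hidden unit j: sum over layer-k+1 units i of
    a_i * (1 - a_i) * (T_i - a_i) * w_ji."""
    return sum(a * (1 - a) * (t - a) * w[(j, i)]
               for i, (a, t) in enumerate(zip(a_next, targets)))

def weight_delta(alpha, i_k, a_j, miss_j):
    # deltakj = alpha * I_k * a_j * (1 - a_j) * miss_j
    return alpha * i_k * a_j * (1 - a_j) * miss_j
```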
Compute deltas
How do we pick α?
1. Tuning set, or
2. Cross validation, or
In the restaurant problem, the NN was worse than the decision tree.
See the next slide for an explanation.
Visualization of Backpropagation Learning
Calculate difference to
desired output
Compute the
error in output
Update weights
to output layer
Compute error in
each hidden layer
Update weights in
each hidden layer
Return learned network
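The five steps in this flow can be sketched as one complete training loop for a single-hidden-layer sigmoid network (the XOR data, layer size, learning rate, and seed are illustrative choices, not part of the slides):

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict(w_h, w_o, x):
    """Forward pass through one hidden layer; a trailing 1.0 input is the bias."""
    xb = list(x) + [1.0]
    h = [sigmoid(sum(wi * xi for wi, xi in zip(row, xb))) for row in w_h]
    return sigmoid(sum(wi * hi for wi, hi in zip(w_o, h + [1.0])))

def train(examples, n_hidden=3, alpha=0.5, epochs=5000, seed=0):
    rng = random.Random(seed)
    n_in = len(examples[0][0])
    # Small random initial weights; the extra entry per unit is a bias weight.
    w_h = [[rng.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hidden)]
    w_o = [rng.uniform(-1, 1) for _ in range(n_hidden + 1)]
    for _ in range(epochs):
        for x, target in examples:
            xb = list(x) + [1.0]
            # Forward pass.
            h = [sigmoid(sum(wi * xi for wi, xi in zip(row, xb))) for row in w_h]
            hb = h + [1.0]
            o = sigmoid(sum(wi * hi for wi, hi in zip(w_o, hb)))
            # Compute the error in the output (difference to desired output).
            delta_o = o * (1 - o) * (target - o)
            # Compute the error in the hidden layer (using the old output weights).
            delta_h = [h[j] * (1 - h[j]) * delta_o * w_o[j] for j in range(n_hidden)]
            # Update weights to the output layer.
            for j in range(n_hidden + 1):
                w_o[j] += alpha * delta_o * hb[j]
            # Update weights in the hidden layer.
            for j in range(n_hidden):
                for k in range(n_in + 1):
                    w_h[j][k] += alpha * delta_h[j] * xb[k]
    return w_h, w_o  # return the learned network

xor = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w_h, w_o = train(xor)
```

Unlike a single perceptron, this two-layer network can reduce its error on XOR because the hidden units carve the input space into linearly separable pieces.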
Examples and Applications of ANN
Neural Network in Practice
• Results
– 95% accuracy on the training data
– 78% accuracy on the test set
Other Examples
• Neurogammon (Tesauro & Sejnowski, 1989)
– Backgammon learning program
– 4 hidden units
• Interactive
– activation propagates forward & backwards
– propagation continues until equilibrium is reached in
the network
– We do not discuss these networks here: training is complex and they may be
unstable.
Ways of learning with an ANN
• Add nodes & connections
• Subtract nodes & connections
• Modify connection weights
– current focus
– can simulate first two
• I/O pairs:
– given the inputs, what should the output be?
[“typical” learning problem]
More Neural Network
Applications
- May provide a model for massively parallel computation.
Ex1. Software tool - Enterprise Miner
Ex2. Nestor:
- Uses the Nestor Learning System (NLS).
- Several multi-layered feed-forward neural networks.
- Intel has made such a chip - NE1000 in VLSI technology.
• Software exists.
Summary
- A neural network is a computational model that simulates some properties
of the human brain.
Eric Wong
Eddy Li
Martin Ho
Kitty Wong