Two-Layer Backpropagation Artificial Neural Network
Th. Tangkeshwar Singh
Asstt. Prof. (Senior Scale)
Department of Computer Science
M.U.
[Figure: two-layer network with input layer i, hidden layer j, and output layer k]
Training Steps
Forward Pass:
• 1. Select the next training pair from the training set; apply the input vector to the network input.
• 2. Calculate the output of the network: O = F(XW) in vector notation.
Backward Pass:
• 3. Calculate the error between the network output and the desired output (the target vector from the training pair).
• 4. Adjust the weights of the network in a way that minimizes the error.
• 5. Repeat steps 1 through 4 for each vector in the training
set until the error for the entire set is acceptably low.
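The five steps above can be sketched in code. A minimal NumPy sketch, assuming a 2-4-1 network with bias inputs, a learning rate of 0.5, and the XOR problem as a toy training set (none of these specifics come from the slides):

```python
import numpy as np

# Illustrative sketch of the training loop: forward pass, error,
# backward pass, weight update. Sizes, learning rate, bias handling,
# and the XOR data set are assumptions for illustration.
rng = np.random.default_rng(0)

def sigmoid(net):
    return 1.0 / (1.0 + np.exp(-net))

# Toy training set (XOR); a constant 1 is appended as a bias input.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
T = np.array([[0.0], [1.0], [1.0], [0.0]])

W1 = rng.normal(scale=0.5, size=(3, 4))  # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(5, 1))  # hidden (+bias) -> output weights
eta = 0.5                                # learning rate (assumed)

for epoch in range(10000):
    for x, t in zip(X, T):                # step 1: next training pair
        h = sigmoid(x @ W1)               # step 2: O = F(XW), hidden layer
        hb = np.append(h, 1.0)            # hidden outputs plus bias unit
        o = sigmoid(hb @ W2)              #         output layer
        delta_o = o * (1 - o) * (t - o)   # step 3: error -> output-layer δ
        delta_h = h * (1 - h) * (W2[:4, :] @ delta_o)  # propagate δ back
        W2 += eta * np.outer(hb, delta_o) # step 4: adjust weights
        W1 += eta * np.outer(x, delta_h)
```

The bias inputs are an assumption worth noting: without them, the [0, 0] input would drive every hidden unit to exactly 0.5 regardless of the weights, and XOR could not be learned.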
• REVERSE PASS:
• Adjusting the weights of the output layer:
• δ = OUT(1 − OUT)(Target − OUT)
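As a numeric sketch of this rule, with the standard weight change Δw = η·δ·OUT of the connecting hidden neuron (the OUT, Target, learning-rate, and hidden-activation values below are made up):

```python
# δ = OUT(1 - OUT)(Target - OUT) for one output neuron,
# followed by the weight change Δw = η·δ·OUT_hidden.
out, target = 0.6, 1.0                     # assumed output and target
delta = out * (1 - out) * (target - out)   # ≈ 0.096

eta, hidden_out = 0.5, 0.8                 # assumed learning rate and hidden activation
dw = eta * delta * hidden_out              # ≈ 0.0384, added to the connecting weight
```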
Adjusting the weights of the hidden layers:
• Hidden layers have no target vector.
• Backpropagation trains the hidden layers by propagating the output error back through the network layer by layer, adjusting the weights at each layer.
• For hidden layers δ must be generated without benefit
of a target vector.
• The value of δ needed for a hidden-layer neuron is produced by summing all products (each weight multiplied by the δ value of the output-layer neuron it connects to) and multiplying the sum by the derivative of the squashing function:
• δ_j = OUT_j(1 − OUT_j) Σ_k δ_k w_jk
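A small numeric sketch of this sum-of-products computation (the hidden output, weights, and downstream δ values are illustrative assumptions):

```python
import numpy as np

# Hidden-layer δ: dot product of the neuron's outgoing weights with the
# δ values of the output neurons they connect to, scaled by the
# sigmoid derivative OUT(1 - OUT).
hidden_out = 0.7                           # this hidden neuron's output (assumed)
w_to_outputs = np.array([0.2, -0.5])       # weights to two output neurons (assumed)
delta_outputs = np.array([0.096, -0.04])   # δ already computed at the output layer
delta_hidden = hidden_out * (1 - hidden_out) * np.dot(w_to_outputs, delta_outputs)
```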
• DERIVATIVE OF SIGMOID TRANSFER FUNCTION: for F(NET) = 1/(1 + e^−NET), the derivative is F′(NET) = OUT(1 − OUT), the factor appearing in both δ formulas.
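A quick check of this identity at NET = 0, where the sigmoid's slope is at its maximum:

```python
import math

def sigmoid(net):
    # F(NET) = 1 / (1 + e^(-NET))
    return 1.0 / (1.0 + math.exp(-net))

out = sigmoid(0.0)       # 0.5
slope = out * (1 - out)  # 0.25 -- the OUT(1 - OUT) factor in the δ rules
```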
• ERROR CALCULATION: network error over the training set is usually measured as the mean squared error, or just MSE: MSE = (1/n) Σ_k (Target_k − OUT_k)².
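For example, MSE over a four-pattern training set (the target and output values below are made up):

```python
import numpy as np

# Mean squared error: average of squared (target - output) over the set.
targets = np.array([0.0, 1.0, 1.0, 0.0])   # assumed target values
outputs = np.array([0.1, 0.8, 0.9, 0.2])   # assumed network outputs
mse = np.mean((targets - outputs) ** 2)    # ≈ 0.025
```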