From Biological Neuron to Artificial Neuron
[Figure: a biological neuron with dendrites, cell body (soma), axon, and synapse, mapped to the corresponding parts of an artificial neuron]
From Biology to Artificial Neural Networks
How Do Neural Networks Work?
The output of a neuron is a function of the weighted sum of the inputs plus a bias. The function of the entire neural network is simply the computation of the outputs of all the neurons. An entirely deterministic calculation.
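A minimal sketch of this computation for a single neuron. The sigmoid is one common choice of activation function, and the input, weight, and bias values are purely illustrative:

```python
import math

def neuron_output(inputs, weights, bias):
    """Output of one neuron: an activation function applied to the
    weighted sum of the inputs plus a bias."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))  # sigmoid chosen as the activation here

# Illustrative values only: three inputs, three weights, one bias.
print(neuron_output([0.5, -1.0, 2.0], [0.1, 0.4, 0.3], bias=0.2))
```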
Network Training
Supervised Training
The network is presented with the input and the desired output. It uses a set of inputs for which the desired output results/classes are known. The difference between the desired and actual output is used to calculate adjustments to the weights of the NN structure, as in the sketch below.
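A minimal sketch of this idea for a single linear neuron; the learning rate and the linear form of the neuron are assumptions for illustration, not specified by the slides:

```python
def supervised_step(weights, bias, inputs, desired, learning_rate=0.1):
    """One supervised training step for a single linear neuron:
    the desired-minus-actual difference drives the weight adjustment."""
    actual = sum(w * x for w, x in zip(weights, inputs)) + bias
    error = desired - actual  # difference between desired and actual output
    new_weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
    new_bias = bias + learning_rate * error
    return new_weights, new_bias
```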
Unsupervised Training
The network is not shown the desired output. The concept is similar to clustering; the network tries to create a classification of the outcomes.
Types of ANNs
- Single-Layer Perceptron
- Multilayer Perceptrons (MLPs)
- Radial Basis Function Networks (RBFs)
- Hopfield Network
- Boltzmann Machine
- Self-Organizing Map (SOM)
- Modular Networks (Committee Machines)
- Backpropagation method
The Perceptron
The operation of Rosenblatt's perceptron is based on the McCulloch and Pitts neuron model. The model consists of a linear combiner followed by a hard limiter. The weighted sum of the inputs is applied to the hard limiter (step or sign function).
[Figure: perceptron with inputs x1, x2, weights w1, w2, a linear combiner, a threshold, and a hard limiter producing output Y]
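A sketch of the perceptron's forward pass as described above: a linear combiner followed by a hard limiter. The weights and threshold are illustrative values, not ones given in the slides:

```python
def perceptron(inputs, weights, threshold):
    """Rosenblatt perceptron: a linear combiner followed by a hard limiter."""
    linear_combination = sum(w * x for w, x in zip(weights, inputs))
    return 1 if linear_combination >= threshold else -1  # hard limiter (sign function)

# Illustrative two-input example (x1, x2 with weights w1, w2, as in the figure).
print(perceptron([1.0, 0.5], weights=[0.7, -0.2], threshold=0.3))
```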
[Figure: three activation functions plotted with Y between -1 and +1 against X — the sign function, the sigmoid function, and the linear function Y = X]
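A minimal sketch of the three activation functions in the figure. Since the figure's sigmoid spans -1 to +1, the bipolar sigmoid (tanh) is assumed here; the unipolar logistic 1/(1+e^-x) is the other common choice:

```python
import math

def sign_function(x):
    """Hard limiter: +1 for non-negative input, -1 otherwise."""
    return 1.0 if x >= 0 else -1.0

def sigmoid_function(x):
    """Bipolar sigmoid (tanh), matching the -1 to +1 range in the figure."""
    return math.tanh(x)

def linear_function(x):
    """Identity: Y = X."""
    return x
```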
Back-Propagation Neural Network (BPN)
Learning rule: a gradient-descent-based delta supervised learning rule that minimizes the total error produced by the net. Training rule steps (implemented in the sketch after the BPN architecture figure below):
- Initialization of weights
- Feed-forward
- Back-propagation of error
- Updating of weights and error
Weights are updated with the learning rate and momentum factor. A new weight is computed in each iteration.
Initial weights are small random values lying between -0.5 and 0.5, or -1 and 1.
- Vij(new): new weight vector for the hidden layer
- Wij(new): new weight vector for the output layer
- Vij(old): previous-iteration weight vector for the hidden layer
- Wij(old): previous-iteration weight vector for the output layer
- α = momentum factor, from 0.0 to 1.0
- η = learning factor, from 0.0 to 2.0
- Vij(t) = current-iteration weight of the hidden layer
- Vij(t-1) = previous-iteration weight of the hidden layer
- Wij(t) = current-iteration weight of the output layer
- Wij(t-1) = previous-iteration weight of the output layer
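Combining these symbols, the update with learning rate and momentum can be written as follows. This is a reconstruction consistent with the definitions above; the back-propagated error term δj and the input xi are standard backpropagation notation assumed here rather than defined in the slides:

```latex
W_{ij}(\text{new}) = W_{ij}(\text{old}) + \eta \, \delta_j \, x_i
                   + \alpha \, \bigl[ W_{ij}(t) - W_{ij}(t-1) \bigr]
```

The hidden-layer weights Vij are updated analogously.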
Momentum Coefficient
Conversely, if the momentum coefficient α is too large, we may end up bouncing around the error surface, out of control, and the algorithm diverges.
BPN Network
[Figure: BPN architecture with input layer x1, x2, x3; hidden layer z1, z2, z3 connected through weights v11..v33 and biases b1, b2, b3 (b0 = b1 = b2 = b3 ≈ 1); output layer connected through weights w1, w2, w3 and bias b0; each neuron applies an activation function and threshold]
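A minimal sketch of one training iteration for the 3-3-1 architecture in the figure, implementing the four training rule steps listed earlier. Sigmoid activations and the values of η and α are assumptions for illustration; the names v, w, b, b0, and z follow the diagram:

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(0)
# Step 1: initialization — small random weights, as described above.
v = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(3)]  # v[i][j]: input i -> hidden j
b = [1.0, 1.0, 1.0]        # hidden biases, b1 = b2 = b3 ~ 1 as in the figure
w = [random.uniform(-0.5, 0.5) for _ in range(3)]
b0 = 1.0                   # output bias
eta, alpha = 0.5, 0.9      # learning rate and momentum factor (illustrative)
prev_dw = [0.0, 0.0, 0.0]  # previous output-weight changes, for the momentum term

def train_step(x, target):
    global b0
    # Step 2: feed-forward through hidden layer z and output y.
    z = [sigmoid(sum(v[i][j] * x[i] for i in range(3)) + b[j]) for j in range(3)]
    y = sigmoid(sum(w[j] * z[j] for j in range(3)) + b0)
    # Step 3: back-propagation of error (delta terms use the sigmoid derivative).
    delta_out = (target - y) * y * (1.0 - y)
    delta_hid = [delta_out * w[j] * z[j] * (1.0 - z[j]) for j in range(3)]
    # Step 4: update weights; momentum is applied to the output layer for brevity.
    for j in range(3):
        dw = eta * delta_out * z[j] + alpha * prev_dw[j]
        w[j] += dw
        prev_dw[j] = dw
    b0 += eta * delta_out
    for i in range(3):
        for j in range(3):
            v[i][j] += eta * delta_hid[j] * x[i]
    for j in range(3):
        b[j] += eta * delta_hid[j]
    return y

print(train_step([0.2, 0.7, 0.1], target=1.0))  # output before this step's update
```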
Effect of Learning Rate
The learning factor controls the learning speed. Start with a low learning rate and steadily increase it; as the learning rate increases, the net's performance increases. The performance of the ANN depends on the learning rate, and the appropriate learning rate differs from problem to problem.
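A minimal illustration of the "start low, steadily increase" advice; the multiplicative schedule and its constants are assumptions, not values given in the slides:

```python
def scheduled_learning_rate(epoch, initial_rate=0.01, growth=1.1):
    """Start with a low learning rate and steadily increase it each epoch."""
    return initial_rate * growth ** epoch

for epoch in range(5):
    print(epoch, round(scheduled_learning_rate(epoch), 5))
```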
Effect of Momentum Factor
The weight is changed in a direction that is a combination of the current weight vector and the previous weight vector. The momentum factor reduces the time needed for the learning process and accelerates the convergence of the error produced by the net. Wrongly assigning this value, however, produces wide swings in the weights.
Effect of Number of Neurons in the Hidden Layer
Good results should be obtained if the number of neurons in the hidden layer is greater than twice the number of input neurons:
Number of neurons in hidden layer ≥ [2 × No. of input neurons + 1]
For example, a net with 3 input neurons would use at least 7 hidden neurons.
Case Studies
Thank You