
Langston, Cognitive Psychology

Perceptron Learning
How does a perceptron acquire its knowledge? The question really is: how does a perceptron learn the appropriate weights?

Perceptron Learning
Remember our features:

Taste: Sweet = 1, Not_Sweet = 0
Seeds: Edible = 1, Not_Edible = 0
Skin: Edible = 1, Not_Edible = 0

For output: Good_Fruit = 1, Not_Good_Fruit = 0

Perceptron Learning

Let's start with no knowledge:

[Diagram: inputs Taste, Seeds, and Skin, each connected to the output with weight 0.0; if the weighted sum > 0.4 then fire]

Perceptron Learning

The weights are empty:

[Diagram: the same network, all three weights 0.0; if > 0.4 then fire]

Perceptron Learning
To train the perceptron, we will show it each example and have it categorize each one. Since its starting with no knowledge, it is going to make mistakes. When it makes a mistake, we are going to adjust the weights to make that mistake less likely in the future.

Perceptron Learning

When we adjust the weights, were going to take relatively small steps to be sure we dont over-correct and create new problems.

Perceptron Learning

Im going to learn the category good fruit defined as anything that is sweet.
Good fruit = 1 Not good fruit = 0

Trained Perceptron
Show it a banana:

[Diagram: Taste = 1, Seeds = 1, Skin = 0; all weights 0.0; output 0.00; if > 0.4 then fire]


Trained Perceptron
Show it a banana:

[Diagram: Taste = 1, Seeds = 1, Skin = 0; all weights 0.0; output 0.00; teacher says 1; if > 0.4 then fire]

Perceptron Learning

In this case we have:

(1 × 0) + (1 × 0) + (0 × 0) = 0

It adds up to 0.0. Since that is less than the threshold (0.40), we responded no. Is that correct? No.
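The computation on this slide can be sketched in code. This is a minimal illustration of the thresholded weighted sum, not code from the slides; the function name `fire` is my own.

```python
# Sketch of the perceptron's decision rule described above.
THRESHOLD = 0.4

def fire(inputs, weights):
    """Weighted sum of the inputs; output 1 ("fire") only above the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total > THRESHOLD else 0

# Banana: taste = 1 (sweet), seeds = 1 (edible), skin = 0 (not edible).
banana = [1, 1, 0]
weights = [0.0, 0.0, 0.0]     # starting with no knowledge
print(fire(banana, weights))  # 0: the sum is 0.0, below 0.4, so it says "no"
```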

Perceptron Learning
Since we got it wrong, we know we need to change the weights. We'll do that using the delta rule (delta for change):

∆w = learning rate × (overall teacher − overall output) × node output

Perceptron Learning

The three parts of that are:

Learning rate: We set that ourselves. I want it to be large enough that learning happens in a reasonable amount of time, but small enough that I don't go too fast. I'm picking 0.25.
(overall teacher − overall output): The teacher knows the correct answer (e.g., that a banana should be a good fruit). In this case, the teacher says 1, the output is 0, so (1 − 0) = 1.
Node output: That's what came out of the node whose weight we're adjusting. For the first node, 1.

Perceptron Learning

To pull it together:
Learning rate: 0.25. (overall teacher − overall output): 1. Node output: 1.

∆w = 0.25 × 1 × 1 = 0.25. Since it's a ∆w, it's telling us how much to change the first weight. In this case, we're adding 0.25 to it.

Perceptron Learning
Let's think about the delta rule: (overall teacher − overall output):

If we get the categorization right, (overall teacher − overall output) will be zero (the right answer minus itself). In other words, if we get it right, we won't change any of the weights. As far as we know we have a good solution; why would we change it?

Perceptron Learning
Let's think about the delta rule: (overall teacher − overall output):

If we get the categorization wrong, (overall teacher − overall output) will be either −1 or +1.

If we said yes when the answer was no, we're too high on the weights, and we will get a (teacher − output) of −1, which will result in reducing the weights. If we said no when the answer was yes, we're too low on the weights, and this will cause them to be increased.

Perceptron Learning
Let's think about the delta rule: Node output:

If the node whose weight we're adjusting sent in a 0, then it didn't participate in making the decision. In that case, it shouldn't be adjusted; multiplying by zero will make that happen. If the node whose weight we're adjusting sent in a 1, then it did participate and we should change the weight (up or down as needed).

Perceptron Learning

How do we change the weights for banana?

Feature  Learning rate  (teacher − output)  Node output  ∆w
taste    0.25           1                   1            +0.25
seeds    0.25           1                   1            +0.25
skin     0.25           1                   0            0
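The table's ∆w column can be reproduced with a small sketch of the delta rule as stated on the earlier slide. The function name `delta_w` is my own shorthand.

```python
# Delta rule from the slides: dw = learning rate * (teacher - output) * node output
LEARNING_RATE = 0.25

def delta_w(teacher, output, node_output):
    return LEARNING_RATE * (teacher - output) * node_output

# Banana: teacher says 1, the perceptron said 0, node outputs are (1, 1, 0).
for node in (1, 1, 0):
    print(delta_w(teacher=1, output=0, node_output=node))
# taste and seeds weights each change by +0.25; the skin weight is unchanged
```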

Perceptron Learning
Adjusting weight 1:
0.25 × (1 − 0) × 1 = 0.25

Perceptron Learning
Corrected weight 1:
[Diagram: Taste weight is now 0.25; Seeds and Skin weights still 0.0]

Perceptron Learning
Adjusting weight 2:
0.25 × (1 − 0) × 1 = 0.25

Perceptron Learning
Corrected weight 2:
[Diagram: Taste weight 0.25; Seeds weight now 0.25; Skin weight 0.0]

Perceptron Learning
Adjusting weight 3:
0.25 × (1 − 0) × 0 = 0.00

Perceptron Learning
Corrected weight 3:
[Diagram: Taste weight 0.25; Seeds weight 0.25; Skin weight unchanged at 0.0]

Perceptron Learning
To continue training, we show it the next example and adjust the weights again. We will keep cycling through the examples until we go all the way through one time without making any changes to the weights. At that point, the concept is learned.
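The whole procedure just described can be sketched as a training loop. This is a minimal illustration, not code from the slides; the example fruits and their feature values are taken from the slides (sweet = good fruit), and the function names are my own.

```python
# Sketch of the full training procedure: cycle through the examples,
# applying the delta rule after each one, until a full pass changes nothing.
# Feature order: (taste, seeds, skin); teacher = 1 for good fruit (sweet).
EXAMPLES = {
    "banana":      ((1, 1, 0), 1),
    "pear":        ((1, 0, 1), 1),
    "lemon":       ((0, 0, 0), 0),
    "strawberry":  ((1, 1, 1), 1),
    "green apple": ((0, 0, 1), 0),
}
THRESHOLD, LEARNING_RATE = 0.4, 0.25

def classify(inputs, weights):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) > THRESHOLD else 0

def train(examples):
    weights = [0.0, 0.0, 0.0]        # start with no knowledge
    changed = True
    while changed:                   # stop after one full error-free pass
        changed = False
        for inputs, teacher in examples.values():
            output = classify(inputs, weights)
            for j, node in enumerate(inputs):
                dw = LEARNING_RATE * (teacher - output) * node
                if dw != 0:
                    weights[j] += dw
                    changed = True
    return weights

print(train(EXAMPLES))  # [0.5, 0.25, 0.25], matching the slides' final weights
```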

Perceptron Learning
Show it a pear:

[Diagram: Taste = 1, Seeds = 0, Skin = 1; weights 0.25, 0.25, 0.0; output 0.25; teacher says 1; if > 0.4 then fire]

Perceptron Learning

How do we change the weights for pear?

Feature  Learning rate  (teacher − output)  Node output  ∆w
taste    0.25           1                   1            +0.25
seeds    0.25           1                   0            0
skin     0.25           1                   1            +0.25

Perceptron Learning
Adjusting weight 1:
0.25 × (1 − 0) × 1 = 0.25

Perceptron Learning
Corrected weight 1:
[Diagram: Taste weight is now 0.50; Seeds weight 0.25; Skin weight 0.0]

Perceptron Learning
Adjusting weight 2:
0.25 × (1 − 0) × 0 = 0.00

Perceptron Learning
Corrected weight 2:
[Diagram: Taste weight 0.50; Seeds weight unchanged at 0.25; Skin weight 0.0]

Perceptron Learning
Adjusting weight 3:
0.25 × (1 − 0) × 1 = 0.25

Perceptron Learning
Corrected weight 3:
[Diagram: Taste weight 0.50; Seeds weight 0.25; Skin weight now 0.25]

Perceptron Learning
Here it is with the final weights:

[Diagram: Taste weight 0.50; Seeds weight 0.25; Skin weight 0.25; if > 0.4 then fire]

Perceptron Learning
Show it a lemon:

[Diagram: Taste = 0, Seeds = 0, Skin = 0; weights 0.50, 0.25, 0.25; output 0; teacher says 0; if > 0.4 then fire]

Perceptron Learning

How do we change the weights for lemon?

Feature  Learning rate  (teacher − output)  Node output  ∆w
taste    0.25           0                   0            0
seeds    0.25           0                   0            0
skin     0.25           0                   0            0

Perceptron Learning
Here it is with the adjusted weights:

[Diagram: unchanged; Taste weight 0.50; Seeds weight 0.25; Skin weight 0.25; if > 0.4 then fire]

Perceptron Learning
Show it a strawberry:

[Diagram: Taste = 1, Seeds = 1, Skin = 1; weights 0.50, 0.25, 0.25; output 1; teacher says 1; if > 0.4 then fire]

Perceptron Learning

How do we change the weights for strawberry?

Feature  Learning rate  (teacher − output)  Node output  ∆w
taste    0.25           0                   1            0
seeds    0.25           0                   1            0
skin     0.25           0                   1            0

Perceptron Learning
Here it is with the adjusted weights:

[Diagram: unchanged; Taste weight 0.50; Seeds weight 0.25; Skin weight 0.25; if > 0.4 then fire]

Perceptron Learning
Show it a green apple:

[Diagram: Taste = 0, Seeds = 0, Skin = 1; weights 0.50, 0.25, 0.25; sum 0.25; output 0; teacher says 0; if > 0.4 then fire]

Perceptron Learning

If you keep going, you will see that this perceptron can correctly classify the examples that we have.
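That claim can be checked directly with a short sketch: plug the final weights into the decision rule and run every example from the slides through it. The feature tuples are the ones used throughout the deck.

```python
# Verifying that the learned weights classify all five examples correctly.
WEIGHTS = [0.50, 0.25, 0.25]   # taste, seeds, skin
THRESHOLD = 0.4

def classify(inputs):
    return 1 if sum(i * w for i, w in zip(inputs, WEIGHTS)) > THRESHOLD else 0

# (taste, seeds, skin) -> teacher; feature values follow the slides
examples = {
    (1, 1, 0): 1,   # banana
    (1, 0, 1): 1,   # pear
    (0, 0, 0): 0,   # lemon
    (1, 1, 1): 1,   # strawberry
    (0, 0, 1): 0,   # green apple
}
print(all(classify(inputs) == teacher for inputs, teacher in examples.items()))  # True
```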
