
Learning

Learning is a process that results in relatively consistent change in behavior or behavior potential
and is based on experience. Following are the three critical parts of this definition:

1. A change in behavior or behavior potential


Suppose that my mother is teaching me how to make pancakes. If I can then make a pancake by
myself, it becomes clear that learning has taken place. That means learning is
apparent from improvements in our performance; to be more specific, there is a
change in behavior. However, our performance does not show everything that we have
learnt. Suppose I watch a movie in which a taxi driver kills his passengers, then get
back to my work. Afterwards, whenever I have to go out, I do not get into a taxi; if
it is very urgent and I have no other means of travel, I make sure that I am
not riding alone in the taxi. In this case, I have acquired a potential for behavior change.
2. A relatively consistent change
To qualify as learned, a change in behavior or behavior potential must be relatively
consistent over different occasions. Thus, once you learn to drive a car, you will probably
always be able to do so.
3. A process based on experience
Learning can take place only through experience. Experience includes taking in
information and making responses that affect the environment. Learned behavior does not
include changes that come about because of physical maturation, or those caused by illness,
fatigue, or drugs.

Characteristics or features of learning process

1. Learning connotes change: learning is a change in behavior, for better or worse. The
changes produced by learning are not always positive in nature.
2. It is a change that takes place through practice or experience. Changes that come about
through growth, drugs, illness, or fatigue are not called learning.
3. The direction of learning can be vertical and or horizontal. “Vertical learning” applies to
addition of knowledge to that which already is possessed in a particular area of knowledge,
the improvement of a skill in which some dexterity has been achieved or strengthening of
developing attitudes and thinking. “Horizontal learning” means that the learner is
widening his or her learning horizons, competence in new forms of skills, gaining new
interests, discovering new approaches to problem solving and developing different
attitudes towards newly experienced situations and conditions.
4. Learning is an active process
5. Learning is goal directed

Types of learning/ Theories of learning


1. Behaviorists focus on a basic kind of learning called conditioning, which involves
associations between environmental stimuli and responses. Behaviorists have shown that
two types of conditioning, classical and operant, can explain much of human behavior. But
for social cognitive theorists, learning includes not only changes in behavior but also
changes in our thoughts, expectations, and knowledge. According to these theorists,
cognition plays an important role in learning; this view includes Bandura’s Observational
learning and Kohler’s Insight learning.
a. Classical conditioning
Classical conditioning is defined as learning in which an organism learns to
associate two stimuli such that one stimulus comes to elicit a response that originally
was elicited only by the other stimulus. Russian physiologist Ivan Pavlov was studying
salivation in dogs as a part of a research program on digestion. One of his procedures
was to make a surgical opening in a dog’s cheek and insert a tube that conducted
saliva away from dog’s salivary gland so that the saliva could be measured. To
stimulate the reflexive flow of saliva, he placed meat powder or other food in the
dog’s mouth. During his study, one of his students noticed something different. After a
dog had been brought to the laboratory a number of times, it would start to salivate
before the food was placed in its mouth. The sight or smell of the food, the dish in
which the food was kept, even the sight of the person who delivered the food each
day and the sound of the person’s footsteps were enough to start the dog’s mouth
watering. These new salivary responses were not inborn, so they had to have been
acquired through experience.
Further study confirmed this observation made during his research on digestion. Dogs
have a natural reflex to salivate to food but not to tones. Yet when a tone or other
stimulus that ordinarily did not cause salivation was presented just before food
powder was put into a dog’s mouth, soon the sound of the tone also made the dog
salivate.
Basic principles of classical conditioning
i. Acquisition
It is defined as the process by which a neutral stimulus acquires the ability to
elicit a conditioned response through repeated pairings of an unconditioned
stimulus with the neutral stimulus. Suppose we want to condition a dog to
salivate to a tone. Sounding the tone initially may cause the dog to look around
to locate where the sound is coming from, but not to salivate. At this
time, the tone is a NEUTRAL STIMULUS because it does not elicit salivation. If,
however, we place food in the dog’s mouth, the dog will salivate. This
salivation response to food is reflexive, i.e. it is what dogs do by nature,
because no learning is required to produce salivation at the sight of food. The
food is the unconditioned stimulus (UCS): a stimulus that elicits a reflexive or innate
response without prior learning. Salivation here is the unconditioned response
(UCR): a reflexive or innate response that is elicited by an unconditioned
stimulus without prior learning. Next, the tone and the food are paired;
each pairing is called a learning trial, and the dog salivates. After several
learning trials, if the tone is presented by itself, the dog salivates even though
there is no food. The tone has now become a conditioned stimulus (CS): an
initially neutral stimulus that comes to elicit a conditioned response after
being associated with an unconditioned stimulus. Because the dog now
salivates to the tone, salivation has become a conditioned response (CR): a
response elicited by a conditioned stimulus after the conditioned
stimulus has been associated with an unconditioned stimulus.
Before conditioning:
Tone (neutral stimulus) → no salivation
Food (unconditioned stimulus, UCS) → salivation (unconditioned response, UCR)
During conditioning:
Tone (neutral stimulus) + Food (UCS) → salivation (UCR)
After conditioning:
Tone (conditioned stimulus, CS) → salivation (conditioned response, CR)
Fig. The Classical Conditioning Process
During acquisition, a neutral stimulus must be paired multiple times with a
UCS to convert the neutral stimulus into a conditioned stimulus.
Initially, psychologists believed that conditioning was determined primarily by
the number of neutral-unconditioned stimulus pairings, but there are other
factors too. The sequence and time interval of the neutral-unconditioned
stimulus pairing also affect conditioning. There are mainly three types of
sequence:
• Forward conditioning
In this conditioning, the neutral stimulus is always presented before the
unconditioned stimulus. There are two types of forward conditioning:
• Delay conditioning
The neutral stimulus (tone) appears first and is still present when the
UCS (food) appears.
• Trace conditioning
The neutral stimulus (tone) is stopped, and afterwards the UCS (food)
is presented.
In forward conditioning, it is often optimal for the neutral stimulus to
appear no more than 2 seconds before the UCS. Extremely short intervals,
i.e. less than 0.2 seconds, rarely produce conditioning.
• Simultaneous conditioning
In this conditioning, the neutral stimulus and the UCS begin and end at the
same time.
• Backward conditioning
In this conditioning, the UCS is presented before the neutral stimulus.

[Figure: timing diagrams showing the onsets and offsets of the neutral stimulus and the UCS over time for delayed, trace, simultaneous, and backward conditioning]
Fig: Four variations of the neutral-UCS temporal (time) arrangements in classical conditioning
Apart from these factors, conditioning is faster when the intensity of either
the neutral or unconditioned stimulus increases during the learning trials.

ii. Extinction and spontaneous recovery


If after conditioning, the conditioned stimulus is repeatedly presented without
the unconditioned stimulus, the conditioned response eventually disappears
and extinction is said to have occurred. The reappearance of a learned
response after its apparent extinction is known as spontaneous recovery.

[Figure: CR strength (weak to strong) plotted across trials: it rises during acquisition (neutral stimulus + UCS), falls during extinction (CS alone), and after a rest period shows spontaneous recovery]
Fig: Acquisition, Extinction, and Spontaneous Recovery in Classical Conditioning

The strength of the CR increases during the acquisition phase as the neutral
stimulus and UCS are paired on each learning trial. During extinction, only the CS is
presented; the strength of the CR decreases and finally disappears. After a rest
period following extinction, presentation of the CS elicits a weaker CR, called
spontaneous recovery, that extinguishes more quickly than before.
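The rise and fall of CR strength described above can be sketched as a toy simulation. This is purely illustrative: the update rule, the learning rate, and the 40-percent recovery fraction after the rest period are assumptions, not a fitted model of Pavlov's data.

```python
# Toy simulation of CR strength over acquisition, extinction, and
# spontaneous recovery. Parameters are illustrative assumptions.

def run_phase(strength, trials, paired, rate=0.3):
    """Return final CR strength and its per-trial history.

    `paired` is True when the CS is presented together with the UCS
    (acquisition) and False when the CS is presented alone (extinction).
    """
    history = []
    for _ in range(trials):
        target = 1.0 if paired else 0.0
        strength += rate * (target - strength)  # move toward the target
        history.append(round(strength, 3))
    return strength, history

s = 0.0
s, acq = run_phase(s, 10, paired=True)   # acquisition: CS + UCS pairings
s, ext = run_phase(s, 10, paired=False)  # extinction: CS alone
s = 0.4 * max(acq)                       # rest period: assumed partial recovery
s, rec = run_phase(s, 5, paired=False)   # the recovered CR extinguishes quickly

print(acq[-1], ext[-1], rec[0])
```

The weaker starting strength of the recovery phase mirrors the figure: the spontaneously recovered CR begins well below its acquisition peak and fades faster.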
iii. Generalization and discrimination
Pavlov found that once a CR is acquired, the organism often responds not only
to the original CS but also to stimuli that are similar to it. The greater the
similarity, the greater the chance that a CR will occur. A dog that salivates to a
medium-pitched tone is more likely to salivate to a new tone slightly different
in pitch than to a very low- or high-pitched tone. This is called stimulus
generalization: stimuli similar to the initial CS elicit the CR.
In classical conditioning, discrimination is demonstrated when the CR occurs to
one stimulus but not to others. Simply, stimulus discrimination is defined as
the process of learning to respond to certain stimuli but not to others.
iv. Higher-order conditioning
A neutral stimulus becomes a CS after being paired with an already
established CS. Typically, a higher-order CS produces a CR that is weaker and
extinguishes more rapidly than the original CR.

Before higher-order conditioning:
Black cloth (neutral stimulus) → no salivation
Tone (CS1) → salivation (CR)
During conditioning:
Black cloth (neutral stimulus) + Tone (CS1) → salivation (CR)
After higher-order conditioning:
Black cloth (CS2) → salivation (CR)

b. Operant conditioning
At about the same time that Pavlov was using classical conditioning to induce Russian
dogs to salivate to the sound of a bell, Edward L. Thorndike (1898) was
watching American cats trying to escape from puzzle boxes. Thorndike built a special
cage called a puzzle box that could be opened from the inside by pulling a string or
stepping on a lever. He placed a hungry cat inside the box. Food was placed outside so
that the animal had to learn how to open the box to get the food. The cat at first
scratched and pushed the bars and tried to dig through the floor, and by chance it
eventually stepped on the lever and opened the door. The same cat was placed
in the box repeatedly for several trials. Over time, the cat learned to press the lever soon
after the door was shut. From this, Thorndike concluded that with trial and error, the cat
gradually eliminated the responses that failed to open the door and became more
likely to perform the actions that worked. He called this process Instrumental Learning
because an organism’s behavior is instrumental in bringing about certain outcomes.
He also proposed the Law of Effect: in a given situation, a response followed by a
satisfying consequence will become more likely to occur, and a response followed by
an annoying consequence will become less likely to occur.
B. F. Skinner embraced Thorndike’s view that environmental consequences exert a
powerful effect on behavior. He coined the term operant conditioning. Skinner
(1938, 1953) defined operant conditioning as a type of learning in which behavior is
influenced by the consequences that follow it. Literally, operant means affecting the
environment, operating on it. What Skinner implied was that organisms show different
responses, and these responses affect the environment. The kind of effect or
consequence that is produced determines whether the organism will show that
behavior again in the future or not.
To analyze behavior experimentally, Skinner designed a Skinner box, a special
chamber used to study operant conditioning experimentally. On one wall, there is a
lever positioned above a small cup. When the lever is depressed, a food pellet
automatically drops into the cup. A hungry rat is put into the chamber, and as it moves
about, it accidentally presses the lever. A food pellet falls into the cup and the rat eats it. It
was found that the rat pressed the bar more frequently over time.
There are mainly two types of consequences:
a. Reinforcement
With reinforcement, a response is strengthened by an outcome that follows it.
Typically, the term strengthened is operationally defined as an increase in
frequency of a response. The outcome that increases the frequency of a response
is called a reinforcer.

Types of reinforcement

There are two types of reinforcement

a. Positive reinforcement
Positive reinforcement occurs when a response is strengthened by the
subsequent presentation of a stimulus. The stimulus that follows and
strengthens the response is called a positive reinforcer, such as food, attention,
praise, money, etc. There are two broad types of positive reinforcers:
i. Primary reinforcer
Primary reinforcers are stimuli such as food, water, light,
comfortable air temperature, that an organism naturally finds
reinforcing because they satisfy biological needs.
ii. Secondary reinforcer
Secondary reinforcers are stimuli that acquire reinforcing
properties through their association with primary reinforcers.
Some examples of secondary reinforcers are money, praise, good
grades, awards, gold stars, etc.
b. Negative reinforcement
Negative reinforcement occurs when a response is strengthened by the
subsequent removal or avoidance of an aversive stimulus. The aversive
stimulus that is removed is called a negative reinforcer. For example, we
have a severe headache and we take aspirin to lessen it. In the
future, whenever we have a headache, we take aspirin. In this example, our
act of taking aspirin increases because it removes the aversive stimulus,
i.e. the headache.
As other examples, we put on sunglasses when the sunlight is very bright,
and we turn off the alarm when it sounds in the morning.
b. Punishment
Punishment occurs when a response is weakened by outcomes that follow it. The
outcome that decreases the frequency of response is called punisher.
There are two types of punishments:
i. Positive punishment
A response is weakened by the subsequent presentation of a stimulus, such as
scolding a child for misbehaving. The stimulus that follows and weakens
the response is called a positive punisher. There are two types of positive
punishers:
a. Primary punisher
Pain, extreme heat or cold are inherently punishing and are therefore
known as primary punishers.
b. Secondary punisher
Punishers that are manmade and created according to the situational
demand are called secondary punishers like criticism, demerits, bad
grades, fines, etc.
ii. Negative punishment
A response is weakened by the subsequent removal of a stimulus: for example, a
boy does not do his homework, so his parents cut his allowance, and the
boy starts doing his homework. Similarly, when a girl hits her younger sister, her
mother stops giving her attention, which leads the girl to stop hitting her sister.
Principles of operant conditioning
a. Extinction
In operant conditioning, extinction takes place when the reinforcer that maintained
the response is removed or is no longer available.
b. Stimulus generalization and stimulus discrimination
In operant conditioning, generalization means that an animal or person emits the same
response to similar stimuli, while discrimination means that a response is emitted in the
presence of a stimulus that is reinforced and not in the presence of an
unreinforced stimulus.
c. Shaping
The organism undergoing shaping receives a reward for each small step towards a
final goal i.e. the target response rather than only for the final response. At first,
actions even remotely resembling the target behavior termed as successive
approximations are followed by reward. Gradually, closer and closer
approximations of the final target behavior are required before the reward is
given.
d. Learning on schedule- Schedules of reinforcement
It refers to a rule stating which behavior will be reinforced. There are two basic
types of reinforcement schedules:
i. Continuous reinforcement schedule
In this schedule, we reinforce every desired response/behavior that an
organism shows or makes.
ii. Intermittent/partial reinforcement schedule
In this schedule, we do not reinforce every time the desired response is
made. Partial reinforcement leads to learning that exhibits greater
resistance to extinction than learning resulting from continuous
reinforcement. There are two types of intermittent/partial reinforcement
schedules:
• Interval schedule
Here, reinforcement is based on the passage of time. There are two types
of interval schedule:
• Fixed interval schedule (FI)
This type of schedule requires the passage of a specific amount of
time before a response will be reinforced. No
response during the interval is reinforced. For example, getting a
pay cheque at the end of the week or the month. This schedule
produces a scallop, i.e. a gradual increase in the rate of responding,
with responding occurring at a high rate just before reinforcement
becomes available.
• Variable interval schedule (VI)
In this schedule, the interval after which we reinforce varies. For
example, calling a friend, getting a busy tone, and retrying after a
variable amount of time. There is a constant response rate because
the organism never knows when the reinforcer is scheduled and must
emit a response or lose the opportunity to be reinforced.
• Ratio schedule
• Fixed ratio schedule (FR)
Reinforcement is delivered after a fixed number of responses. After
a response is reinforced, no responding occurs for a period of time,
termed the post-reinforcement pause (PRP); responding then resumes
at a high, steady rate until the next reinforcement is delivered.
The length of the pause is directly proportional to the size of the
ratio. For example, factory workers being paid at a per-piece rate.
The organism under the control of FR eventually learns when it
will be reinforced. It works steadily until it is reinforced, knowing it
will have to emit a certain number of responses before the next
reinforcer is delivered, so it is as though it takes a little break after
each reinforcement before going back to work. On a cumulative record,
the PRP appears flat because time passes with no responses occurring.
• Variable ratio schedule (VR)
In this type of intermittent schedule, the organism is reinforced
after a variable number of responses. For example, the reward on a slot
machine, or bingo. This type of schedule produces a high, steady
response rate.
            Ratio    Interval
Fixed       FR       FI
Variable    VR       VI
Table: 2X2 contingency table to illustrate different types of reinforcement schedules
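The four intermittent schedules can be sketched as simple decision rules for when a reinforcer is delivered. This is a hypothetical illustration: the function names and the parameter values (a ratio of 5 responses, an interval of 60 seconds) are assumptions, not taken from the text.

```python
import random

# Illustrative decision rules for the four intermittent schedules.
# Parameter values are arbitrary assumptions.

def fixed_ratio(n_responses, ratio=5):
    # FR: reinforce every `ratio`-th response
    return n_responses % ratio == 0

def variable_ratio(mean_ratio=5):
    # VR: each response is reinforced with probability 1/mean_ratio,
    # so the number of responses between reinforcers varies
    return random.random() < 1 / mean_ratio

def fixed_interval(elapsed, last_reinforced, interval=60):
    # FI: the first response after `interval` seconds is reinforced
    return elapsed - last_reinforced >= interval

def variable_interval(elapsed, last_reinforced, mean_interval=60):
    # VI: like FI, but the required wait varies around a mean
    required = random.uniform(0.5, 1.5) * mean_interval
    return elapsed - last_reinforced >= required

print(fixed_ratio(10))          # 10th response on an FR-5 schedule
print(fixed_interval(130, 60))  # response made 70 s after the last reinforcer
```

The ratio rules depend only on counting responses, while the interval rules depend only on elapsed time, which is exactly the distinction the 2X2 table captures.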

2. Social learning theories


a. Insight learning
The process of mentally working through a problem until the sudden realization of a
solution occurs is called insight learning. This moment of sudden insight is also called the
“Aha” experience or phenomenon, as described by G. Jones in 2003. Insight is defined
as the sudden recognition of a relationship that leads to the solution of a problem or
problems. In the 1920s, German psychologist Wolfgang Kohler carried out a number of
experiments with chimpanzees. Typically, Kohler placed a chimpanzee in an
enclosed area with a desirable piece of fruit out of reach. To obtain the fruit, the
animal had to use a nearby object as a tool. Usually the chimp solved the problem in a
way suggesting that the animal had some insight. One experiment went as follows:
Kohler kept Sultan, the most intelligent chimpanzee, inside a cage. Sultan could not reach
the fruit that lay outside the cage with the short stick that was inside the cage.
There was a longer stick outside the cage. Sultan tried to reach the fruit with the
shorter stick but did not succeed. He tried many times and gave up. Suddenly, he
picked up the shorter stick, went up to the bars, pulled in the longer stick, and used it
to pull in the fruit and eat it.
Kohler found that the solution was sudden rather than the result of a gradual
trial-and-error process. Once a chimp solved a problem, it thereafter solved the problem
with few irrelevant moves. In addition, Kohler found that the chimp readily
transferred what it had learnt to a novel situation. For example, in one situation,
Sultan was not caged but some bananas were placed too high for him to reach. To
solve the problem, he had to stack some boxes that were around him and climb on
them to get the bananas.
There are therefore three critical aspects of insight learning:
i. Suddenness
ii. Availability
iii. Transferability

Now the question arises: how could the solution have come so suddenly? The
answer could be that the organism forms a mental representation of
the problem, then mentally reorganizes and manipulates components of the problem,
thereby arriving at a new relationship among the components
that leads to the solution, i.e. mental trial and error.

b. Observation learning
In this learning, we acquire new behavior by imitating behaviors we observe in others.
The person whose behavior is observed is called a Model. Observation learning is also
called social learning theory because we acquire much of our behavior by observing
and imitating others within a social context. Simply by watching the behavior of others,
we can learn many behaviors without going through the tedious trial-and-error
process of gradually eliminating wrong responses and acquiring the right responses.
In 1963, Bandura and his colleagues did a study on observation learning with
nursery-school children. The children were divided into two groups. In a video,
one group of children saw an adult hitting, kicking, and verbally assaulting a Bobo doll,
while the other group saw an adult treating the Bobo doll lovingly and with care. Later on,
when the children were allowed to play with the doll, the children who saw the adult
kick and hit the Bobo doll did the same, while the other group of children, who saw the
adult loving the doll, treated it nicely.
According to Bandura (1986), there are four elements or steps of observational
learning:
i. Attention
In order to learn through observation, we have to pay attention to the
behavior being shown.
ii. Retention
In order to imitate the behavior, we have to remember what the model said
and did. Retention can be improved by mental rehearsal or by actual practice.
iii. Motor reproduction
We need to be able to convert these memories into appropriate actions; this is
called production. It depends on our physical abilities and our capacity to monitor
our own performance and adjust it until it matches that of the model. Apart
from that, practice makes the behavior smoother and more expert.
iv. Motivation and reinforcement
We may acquire a new skill or behavior through observation, but we may not
perform that behavior until there is some motivation or incentive to do so. If
we are not in need of the skill/behavior being shown by another person, we will
not pay attention to that behavior. In addition, reinforcement does
play a role in observation learning. If we anticipate being reinforced for
imitating the actions of a model, we may be more motivated to pay attention,
remember, and reproduce the behavior. Bandura identified the following forms of
reinforcement that can encourage observational learning:
• The observer may reproduce the behavior of the model if he/she receives
direct reinforcement for imitating it.
• The observer may simply see others reinforced for a particular behavior
and then increase his/her own production of that behavior.

According to Baldwin and Baldwin (1973) and Bandura (1977), a model’s behavior
will be most influential when:

• It is seen as having reinforcing consequences.
• The model is perceived positively, liked, and respected.
• There are perceived similarities between the features and traits of the model and
the observer.
• The model’s behavior is visible and salient: it stands out as a clear figure
against the background of competing models, and it is within the observer’s
range of competence to imitate.
• The observer is rewarded for paying attention to the model’s behavior.
c. Trial and error learning
Trial and error learning was put forward by American psychologist Edward Lee
Thorndike. When no solution to a problem is available, an individual adopts
the method of trial and error. The individual first tries one solution; if it does not
help, he/she rejects it, then tries another, and so on. In this way, the
individual eliminates errors or irrelevant responses that do not serve the purpose
and finally discovers the correct solution.
In one of his famous experiments, Thorndike put a hungry cat in a cage called a puzzle
box. The door of the box could be opened by correctly manipulating a latch, allowing
the cat to come out. Food was kept outside the cage, which acted as a
strong motive for the cat to come out. When placed in the puzzle box, the cat
became restless and moved randomly around the box. It tried to squeeze through every
opening and pulled at the bars, since it had a motive or purpose to get out of the box.
During these random movements and attempts to open the door, the latch was
accidentally manipulated and the door opened. The cat came out and ate the food.
The cat was put in the puzzle box again, and again it made random movements and
attempts to open the door and eat the food. But this time, incorrect responses such as
pulling and biting the bars gradually decreased, and the number of errors was lower
than in previous trials.
Trial and error learning states that when placed in a new situation, an individual makes
random movements; those that are unsuccessful are eliminated and the
successful movements are fixed. As the trials increase, the errors and the time
taken decrease.
Steps of trial and error learning
1. Need
2. Goal
3. Block or barrier
4. Random movements or responses or attempts
5. Chance successes
6. Selection of proper and correct movements and elimination of wrong and
incorrect responses
7. Repetition of successful movements or responses
8. Achievement of goals
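The steps above can be sketched as a toy simulation in which unsuccessful responses are weakened and the successful response is fixed, so errors tend to shrink across trials. The response names, weights, and update amounts are illustrative assumptions, not Thorndike's actual procedure.

```python
import random

# Toy trial-and-error learner: wrong responses are gradually eliminated
# (weights lowered) and the successful movement is fixed (weight raised).

def run_trial(responses, correct, weights):
    """Try responses until the correct one works; return the error count."""
    errors = 0
    while True:
        r = random.choices(responses, weights=[weights[x] for x in responses])[0]
        if r == correct:
            weights[r] += 2.0  # fix the successful movement
            return errors
        weights[r] = max(0.1, weights[r] - 0.5)  # eliminate the wrong response
        errors += 1

random.seed(0)
responses = ["pull bars", "bite bars", "dig floor", "press lever"]
weights = {r: 1.0 for r in responses}
errors_per_trial = [run_trial(responses, "press lever", weights) for _ in range(20)]
print(errors_per_trial)  # errors tend to decrease across trials
```

As in Thorndike's puzzle box, later trials are dominated by the fixed successful response, since its weight keeps growing while the weights of the error responses stay near their floor.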

How do we learn?

We learn by association; our mind naturally connects events that occur in sequence. Suppose we
see and smell a cake in a bakery shop, eat one, and find it tasty. The next time we see and smell
such cakes, that experience will lead us to expect that eating one will once again be tasty.

In associative learning, we learn that certain events occur together. Conditioning is a process of
learning associations. In classical conditioning, we learn to associate two stimuli such that one
stimulus comes to elicit a response that originally was elicited only by other stimulus. While in
operant conditioning, we learn to associate a response i.e. our behavior and its consequences and
thus to repeat acts followed by good results and avoid the ones followed by bad results.

But for socio-cognitive learning theorists, learning includes not only changes in behavior but also
changes in our thoughts, expectations, and knowledge. According to these theorists, cognition
plays an important role in learning. Bandura’s observation learning and Kohler’s insight learning
come under socio-cognitive learning theory.

Behavior modification

Behavior modification is defined as the systematic application of the principles of operant conditioning
for learning desired behavior. It is also defined as the field of psychology concerned with analyzing
and modifying human behavior.

Analyzing means identifying the functional relationship between environmental events and a
particular behavior, to understand the reasons for the behavior or to determine why a person
behaves as he/she does.
Modifying means developing and implementing procedures to help people change their behavior.
It involves altering environmental events so as to influence behavior.

Characteristics of behavior modification

1. They are designed to change behavior
Behavioral excesses and deficits are targets for change with behavior modification procedures.
A behavioral excess is an undesirable behavior that a person wants to decrease in frequency,
duration, or intensity. A behavioral deficit is a desirable target behavior that a person wants to
increase in frequency, duration, or intensity.
2. The procedures are based on principles of operant conditioning.
3. Emphasis is on current environmental events
Behavior modification involves assessing and modifying current environmental events that are
functionally related to behavior. Human behavior is controlled by events in immediate
environment and the goal of behavior modification is to identify those events. Once identified,
they are altered to modify the behavior.
4. Measure of behavior change
There is emphasis on measuring the behavior before and after the intervention to determine
the behavior change resulting from the behavior modification procedures.

Behavior modification is used in many areas like health, school, home, organizations. The
application of behavior modification concepts in work setting is called organizational behavior
modification. It has been successfully used to improve productivity, attendance, punctuality, safe
work practices, customer services, and other important behaviors in a wide variety of kinds of
organizations such as banks, department stores, factories, hospitals, and construction sites. It can
be used to encourage learning of desired organizational behaviors as well as to discourage
undesired behaviors.

Typical organizational behavior modification (OB Mod) program follows five steps:

1. Identifying behaviors
Identifying behavior to be modified. The behavior should be observable, measurable, should
be relevant to job and organizational performance. And it should be critical. Critical behaviors
are those behaviors that make a significant impact on an employee’s job performance. These
are those 5 to 10 percent of behaviors that may account for up to 70 to 80 percent of each
employee’s performance.
2. Measuring the frequency of behavior
The repetitions of the identified behavior are counted. This provides a baseline of the
behavior. We may measure the behavior by direct observation, by questioning managers,
supervisors, team leaders, or any other related personnel, or by consulting archival data.
3. Analyzing
Analysis is done in A-B-C terms, where A stands for antecedents, i.e. the preceding events or
circumstances that act as a prompt to B, the behaviors or responses made by the organism,
and C stands for consequences: a response followed by a satisfying consequence will
become more likely to occur, and a response followed by an annoying consequence will
become less likely to occur (the Law of Effect proposed by Edward Thorndike).
4. Developing and implementing intervention strategies
Intervention is done to modify the identified behavior. The goal of developing and
implementing interventions is to strengthen desirable behaviors and weaken undesirable
behaviors. Mainly, positive and negative reinforcement are used, but circumstances arise
where punishment has to be used as well.
5. Evaluating performance improvement
We evaluate the effectiveness of the intervention strategy that we have used. We measure
the frequency of behavior to determine the effectiveness of the intervention strategy. If the
behavior has been successfully modified, then all that needs to be done at this step is to
maintain the intervention. If the behavior has not been modified, then we need to reconsider
the intervention methods and modify them accordingly and/or reconsider the behavior we
originally identified.
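The five-step loop described above can be sketched in code. Everything here (the function names, the Intervention type, and the sample "safe lifting" behavior) is hypothetical scaffolding to show the control flow, not an established OB Mod API.

```python
from dataclasses import dataclass

@dataclass
class Intervention:
    target_behavior: str
    strategy: str  # e.g. "positive reinforcement"

def ob_mod(identify, measure, analyze, intervene, improved, max_cycles=5):
    """Run the OB Mod cycle until performance improves or cycles run out."""
    behavior = identify()                          # 1. identify critical behavior
    baseline = measure(behavior)                   # 2. measure baseline frequency
    antecedents, consequences = analyze(behavior)  # 3. A-B-C analysis
    for _ in range(max_cycles):
        plan = intervene(behavior, antecedents, consequences)  # 4. intervene
        if improved(baseline, measure(behavior)):  # 5. evaluate against baseline
            return plan                            # success: maintain intervention
    return None                                    # reconsider behavior/intervention

# Hypothetical usage: reinforcing safe lifting on a construction site.
freq = {"safe lifting": 2}
def identify(): return "safe lifting"
def measure(b): return freq[b]
def analyze(b): return ("toolbox-talk prompt", "supervisor praise")
def intervene(b, a, c):
    freq[b] += 3  # assume reinforcement raises the behavior's frequency
    return Intervention(b, "positive reinforcement")
def improved(before, after): return after > before

plan = ob_mod(identify, measure, analyze, intervene, improved)
print(plan.strategy)
```

The loop mirrors the flowchart: a successful evaluation keeps the intervention in place, while a failed one cycles back to try again before reconsidering the originally identified behavior.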

Research suggests that OB Mod, if appropriately used, can be highly effective when it
comes to prompting desirable organizational behavior. For instance, research on OB Mod showed
that it improved employee performance by 17 percent on average. In a field experiment conducted
by Alexander Stajkovic and Fred Luthans in a division of a large organization that processes credit
card bills, OB Mod resulted in a 37 percent increase in performance when the
reinforced behavior was paired with financial incentives. When performance was positively reinforced by
simple supervisory feedback, employee performance increased by 20 percent. When social
recognition and praise were used, performance increased by 24 percent.
[Flowchart: (1) Identify the behavior to change (it must be observable, measurable, task related, and critical to the task); (2) Measure the frequency of the behavior (by direct observation or archival data); (3) Analyze antecedents, behavior, and consequences; (4) Intervene; (5) Evaluate for performance improvements: if yes, maintain the intervention; if no, return to the earlier steps]
Differences between classical conditioning and operant conditioning

Classical conditioning is defined as learning in which an organism learns to associate two stimuli
such that one stimulus comes to elicit a response that originally was elicited only by the other
stimulus. Operant conditioning is defined as learning in which an organism learns to associate a
response with the consequences of that response, and thus to repeat acts followed by good
results and avoid acts followed by bad results.

Classical conditioning focuses on elicited behaviors: the conditioned response is triggered
involuntarily, almost like a reflex, by the conditioned stimulus. Operant conditioning focuses on
emitted behaviors: in a given situation, the organism generates responses that are under its
physical control.

The goal of classical conditioning is to create a new response to a neutral stimulus; the goal of
operant conditioning is to increase or decrease the rate of some response.

In classical conditioning, the organism learns a predictive relationship between stimuli. In operant
conditioning, the organism learns that performing or emitting some behavior is followed by
consequences, which in turn increase or decrease the chances of performing that behavior again.

In classical conditioning, the conditioned stimulus occurs before the conditioned response and
triggers it. In operant conditioning, the reinforcing or punishing consequences occur after the
response is made.

Transfer of learning
