operant conditioning
Dictionary: operant conditioning

n. Psychology
A process of behavior modification in which the likelihood of a specific behavior is
increased or decreased through positive or negative reinforcement each time the behavior
is exhibited, so that the subject comes to associate the pleasure or displeasure of the
reinforcement with the behavior.


Sci-Tech Encyclopedia: Instrumental conditioning

Learning based upon the consequences of behavior. For example, a rat may learn to press
a lever when this action produces food. Instrumental or operant behavior is the behavior
by which an organism changes its environment. The particular instances of behavior that
produce consequences are called responses. Classes of responses having characteristic
consequences are called operant classes; responses in an operant class operate on (act
upon) the environment. For example, a rat's lever-press responses may include pressing
with the left paw or with the right paw, sitting on the lever, or other activities, but all of
these taken together constitute the operant class called lever pressing.

Consequences of responding that produce increases in behavior are called reinforcers. For
example, if an animal has not eaten recently, food is likely to reinforce many of its
responses. Whether a particular event will reinforce a response or not depends on the
relation between the response and the behavior for which its consequences provide an
opportunity. Some consequences of responding called punishers produce decreases in
behavior. The properties of punishment are similar to those of reinforcement, except for
the difference in direction.

The consequences of its behavior are perhaps the most important properties of the world
about which an organism can learn, but few consequences are independent of other
circumstances. Organisms learn that their responses have one consequence in one setting
and different consequences in another. Stimuli that set the occasion on which responses
have different consequences are called discriminative stimuli (these stimuli do not elicit
responses; their functions are different from those that are simply followed by other
stimuli). When organisms learn that responses have consequences in the presence of one
but not another stimulus, their responses are said to be under stimulus control. For
example, if a rat's lever presses produce food when a light is on but not when it is off and
the rat comes to press only when the light is on, the rat's presses are said to be under the
stimulus control of the light. See also Conditioned reflex.
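
The stimulus-control example above maps naturally onto a small simulation. The
following sketch is illustrative only (the learning rule, rates, and numbers are my
assumptions, not from the encyclopedia entry): lever presses are reinforced only
while the light is on, and the press probability in each stimulus condition drifts
toward the outcome it produces.

    import random

    # Hypothetical sketch of stimulus control (assumed learning rule):
    # presses produce food only while the light is on; the press
    # probability in each condition moves toward the outcome it yields.
    press_prob = {"light_on": 0.5, "light_off": 0.5}  # initial indifference
    LEARNING_RATE = 0.05

    for trial in range(2000):
        condition = random.choice(["light_on", "light_off"])
        pressed = random.random() < press_prob[condition]
        if pressed:
            reinforced = (condition == "light_on")  # food only when light is on
            target = 1.0 if reinforced else 0.0
            press_prob[condition] += LEARNING_RATE * (target - press_prob[condition])

    # Pressing ends up near 1 when the light is on, much lower when it is off.
    print(press_prob)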

Medical Dictionary: operant conditioning

n.

A process of behavior modification in which a subject is encouraged to behave in a
desired manner through positive or negative reinforcement, so that the subject comes to
associate the pleasure or displeasure of the reinforcement with the behavior.

WordNet: operant conditioning

The noun has one meaning:

Meaning #1: conditioning in which an operant response is brought under stimulus control
by virtue of presenting reinforcement contingent upon the occurrence of the operant
response

Wikipedia: Operant conditioning

Operant conditioning is the use of consequences to modify the occurrence and form of
behavior. Operant conditioning is distinguished from classical conditioning (also called
respondent conditioning, or Pavlovian conditioning) in that operant conditioning deals
with the modification of "voluntary behavior" or operant behavior. Operant behavior
"operates" on the environment and is maintained by its consequences, while classical
conditioning deals with the conditioning of respondent behaviors which are elicited by
antecedent conditions. Behaviors conditioned via a classical conditioning procedure are
not maintained by consequences.[1]

Contents

• 1 Reinforcement, punishment, and extinction
• 2 Thorndike's law of effect
• 3 Biological correlates of operant conditioning
• 4 Factors that alter the effectiveness of consequences
• 5 Operant variability
• 6 Avoidance learning
• 7 Two-process theory of avoidance
• 8 Verbal Behavior
• 9 Four term contingency
• 10 Operant hoarding
• 11 An alternative to the Law of Effect
• 12 See also
• 13 References
• 14 External links

Reinforcement, punishment, and extinction


Reinforcement and punishment, the core tools of operant conditioning, are either positive
(delivered following a response) or negative (withdrawn following a response). This
creates a total of four basic consequences, to which a fifth procedure is added:
extinction (i.e., no change in consequences following a response).

It's important to note that organisms are not spoken of as being reinforced, punished, or
extinguished; it is the response that is reinforced, punished, or extinguished. Additionally,
reinforcement, punishment, and extinction are not terms whose use is restricted to the
laboratory. Naturally occurring consequences can also be said to reinforce, punish, or
extinguish behavior and are not always delivered by people.
• Reinforcement is a consequence that causes a behavior to occur with greater
frequency.
• Punishment is a consequence that causes a behavior to occur with less frequency.
• Extinction is the lack of any consequence following a behavior. When a behavior
is inconsequential, producing neither favorable nor unfavorable consequences, it
will occur with less frequency. When a previously reinforced behavior is no
longer reinforced with either positive or negative reinforcement, it leads to a
decline in the response.

Four contexts of operant conditioning: Here the terms "positive" and "negative" are not
used in their popular sense, but rather: "positive" refers to addition, and "negative" refers
to subtraction.

What is added or subtracted may be either reinforcement or punishment. Hence "positive
punishment" is a sometimes confusing term, as it denotes the addition of a stimulus, or
an increase in the intensity of a stimulus, that is aversive (such as spanking or an
electric shock). The four procedures are:

1. Positive reinforcement (Reinforcement) occurs when a behavior (response) is
followed by a stimulus that is rewarding, increasing the frequency of that
behavior. In the Skinner box experiment, a stimulus such as food or sugar solution
can be delivered when the rat engages in a target behavior, such as pressing a
lever.
2. Negative reinforcement (Escape) occurs when a behavior (response) is followed
by the removal of an unpleasant stimulus, thereby increasing that behavior's
frequency. In the Skinner box experiment, negative reinforcement can be a loud
noise continuously sounding inside the rat's cage until it engages in the target
behavior, such as pressing a lever, upon which the loud noise is removed.
3. Positive punishment (Punishment) (also called "Punishment by contingent
stimulation") occurs when a behavior (response) is followed by a stimulus, such
as introducing a shock or loud noise, resulting in a decrease in that behavior.
4. Negative punishment (Penalty) (also called "Punishment by contingent
withdrawal") occurs when a behavior (response) is followed by the removal of a
stimulus, such as taking away a child's toy following an undesired behavior,
resulting in a decrease in that behavior.
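
For readers who think in code, the four procedures reduce to one sign convention:
reinforcement (whether positive or negative) raises response frequency, punishment
lowers it. The sketch below is a minimal illustration under that reading; the function
name and step size are mine, not standard terminology.

    # Hypothetical sketch: the four operant procedures as one update rule.
    # "positive"/"negative" name whether a stimulus is added or removed;
    # "reinforcement"/"punishment" name the direction of the rate change.
    def update_response_rate(rate, procedure, step=0.1):
        """Return a new response rate (0..1) after one conditioning episode."""
        effects = {
            "positive_reinforcement": +1,  # add appetitive stimulus    -> rate up
            "negative_reinforcement": +1,  # remove aversive stimulus   -> rate up
            "positive_punishment":    -1,  # add aversive stimulus      -> rate down
            "negative_punishment":    -1,  # remove appetitive stimulus -> rate down
        }
        rate += step * effects[procedure]
        return min(1.0, max(0.0, rate))

    rate = 0.5
    for _ in range(3):
        rate = update_response_rate(rate, "negative_reinforcement")
    print(rate)  # 0.8: escape responding grew more frequent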

Also:

• Avoidance learning is a type of learning in which a certain behavior results in the
cessation of an aversive stimulus. For example, performing the behavior of
shielding one's eyes when in the sunlight (or going indoors) will help avoid the
aversive stimulation of having light in one's eyes.
• Extinction occurs when a behavior (response) that had previously been reinforced
is no longer effective. In the Skinner box experiment, this is the rat pushing the
lever and being rewarded with a food pellet several times, and then pushing the
lever again and never receiving a food pellet again. Eventually the rat would cease
pushing the lever.
• Noncontingent reinforcement refers to delivery of reinforcing stimuli regardless
of the organism's (aberrant) behavior. The idea is that the target behavior
decreases because it is no longer necessary to receive the reinforcement. This
typically entails time-based delivery of stimuli identified as maintaining aberrant
behavior, which serves to decrease the rate of the target behavior.[2] As no
measured behavior is identified as being strengthened, there is controversy
surrounding the use of the term noncontingent "reinforcement".[3]
• Shaping is a form of operant conditioning in which the increasingly accurate
approximations of a desired response are reinforced.[4]
• Chaining is an instructional procedure which involves reinforcing individual
responses occurring in a sequence to form a complex behavior.[4]

Thorndike's law of effect


Main article: Law of effect

Operant conditioning, sometimes called instrumental conditioning or instrumental
learning, was first extensively studied by Edward L. Thorndike (1874–1949), who
observed the behavior of cats trying to escape from home-made puzzle boxes.[5] When
first constrained in the boxes, the cats took a long time to escape. With experience,
ineffective responses occurred less frequently and successful responses occurred more
frequently, enabling the cats to escape in less time over successive trials. In his Law of
Effect, Thorndike theorized that successful responses, those producing satisfying
consequences, were "stamped in" by the experience and thus occurred more frequently.
Unsuccessful responses, those producing annoying consequences, were stamped out and
subsequently occurred less frequently. In short, some consequences strengthened
behavior and some consequences weakened behavior. Thorndike produced the first
known learning curves through this procedure. B.F. Skinner (1904–1990) formulated a
more detailed analysis of operant conditioning based on reinforcement, punishment, and
extinction. Following the ideas of Ernst Mach, Skinner rejected Thorndike's mediating
structures required by "satisfaction" and constructed a new conceptualization of behavior
without any such references. So, while experimenting with some homemade feeding
mechanisms, Skinner invented the operant conditioning chamber which allowed him to
measure rate of response as a key dependent variable using a cumulative record of lever
presses or key pecks.[6]
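
A cumulative record of the kind Skinner used reduces, as data, to a running count of
responses over time; the slope of the curve is the response rate. A minimal sketch
with made-up timestamps:

    # Hypothetical sketch: a cumulative record as plain data. Each lever
    # press increments a running count; count plotted against time gives
    # the cumulative curve whose slope is the response rate.
    press_times = [1.2, 3.5, 4.1, 4.8, 5.0, 7.9, 8.2]  # seconds; invented data

    cumulative = [(t, i + 1) for i, t in enumerate(press_times)]
    for t, count in cumulative:
        print(f"t={t:4.1f}s  total presses={count}")

    # Mean rate over the session = total presses / elapsed time:
    rate = len(press_times) / press_times[-1]
    print(f"mean response rate: {rate:.2f} presses/s")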

Biological correlates of operant conditioning


The first scientific studies identifying neurons that responded in ways that suggested they
encode for conditioned stimuli came from work by Mahlon deLong[7][8] and by R.T.
"Rusty" Richardson and deLong.[8] They showed that nucleus basalis neurons, which
release acetylcholine broadly throughout the cerebral cortex, are activated shortly after a
conditioned stimulus, or after a primary reward if no conditioned stimulus exists. These
neurons are equally active for positive and negative reinforcers, and have been
demonstrated to cause plasticity in many cortical regions.[9] Evidence also exists that
dopamine is activated at similar times. There is considerable evidence that dopamine
participates in both reinforcement and aversive learning.[10] Dopamine pathways project
much more densely onto frontal cortex regions. Cholinergic projections, in contrast, are
dense even in the posterior cortical regions like the primary visual cortex. A study of
patients with Parkinson's disease, a condition attributed to the insufficient action of
dopamine, further illustrates the role of dopamine in positive reinforcement.[11] It showed
that while off their medication, patients learned more readily with aversive consequences
than with positive reinforcement. Patients who were on their medication showed the
opposite to be the case, positive reinforcement proving to be the more effective form of
learning when the action of dopamine is high.

Factors that alter the effectiveness of consequences


When using consequences to modify a response, the effectiveness of a consequence can
be increased or decreased by various factors. These factors can apply to either reinforcing
or punishing consequences.

1. Satiation/Deprivation: The effectiveness of a consequence will be reduced if the
individual's "appetite" for that source of stimulation has been satisfied. Inversely,
the effectiveness of a consequence will increase as the individual becomes
deprived of that stimulus. If someone is not hungry, food will not be an effective
reinforcer for behavior. Satiation is generally only a potential problem with
primary reinforcers, those that do not need to be learned such as food and water.
2. Immediacy: After a response, how immediately a consequence is then felt
determines the effectiveness of the consequence. More immediate feedback will
be more effective than less immediate feedback. If someone's license plate is
caught by a traffic camera for speeding and they receive a speeding ticket in the
mail a week later, this consequence will not be very effective against speeding.
But if someone is speeding and is caught in the act by an officer who pulls them
over, then their speeding behavior is more likely to be affected.
3. Contingency: If a consequence does not contingently (reliably, or consistently)
follow the target response, its effectiveness upon the response is reduced. But if a
consequence follows the response consistently after successive instances, its
ability to modify the response is increased. A consistent schedule of reinforcement
leads to faster learning; when the schedule is variable, learning is slower. Behavior
learned under intermittent reinforcement is more difficult to extinguish, while
behavior learned under a highly consistent schedule is extinguished more easily.
4. Size: This is a "cost-benefit" determinant of whether a consequence will be
effective. If the size, or amount, of the consequence is large enough to be worth
the effort, the consequence will be more effective upon the behavior. An
unusually large lottery jackpot, for example, might be enough to get someone to
buy a one-dollar lottery ticket (or even to buy multiple tickets). But if a lottery
jackpot is small, the same person might not feel it to be worth the effort of driving
out and finding a place to buy a ticket. In this example, it's also useful to note that
"effort" is a punishing consequence. How these opposing expected consequences
(reinforcing and punishing) balance out will determine whether the behavior is
performed or not.
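
These four factors can be folded into a toy "effectiveness" score, as sketched below.
The sketch is purely illustrative: the multiplicative form and the exponential delay
discounting are my assumptions, chosen only to reproduce the qualitative claims above
(e.g., the mailed ticket versus the roadside stop).

    import math

    # Hypothetical sketch: one toy score combining the four factors.
    # Functional forms are assumptions for illustration, not claims
    # from the behavioral literature.
    def consequence_effectiveness(deprivation, delay_min, contingency, size,
                                  discount_per_min=0.05):
        """deprivation and contingency in 0..1; delay in minutes; size > 0."""
        immediacy = math.exp(-discount_per_min * delay_min)  # delayed feedback decays
        return deprivation * immediacy * contingency * size

    # A ticket in the mail a week later vs. being pulled over on the spot:
    print(consequence_effectiveness(1.0, 7 * 24 * 60, 0.3, 1.0))  # mailed ticket: ~0
    print(consequence_effectiveness(1.0, 1, 0.9, 1.0))            # pulled over: ~0.86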

Most of these factors exist for biological reasons. The biological purpose of the Principle
of Satiation is to maintain the organism's homeostasis. When an organism has been
deprived of sugar, for example, the effectiveness of the taste of sugar as a reinforcer is
high. However, as the organism reaches or exceeds its optimum blood-sugar level, the
taste of sugar becomes less effective, perhaps even aversive.

The principles of Immediacy and Contingency exist for neurochemical reasons. When an
organism experiences a reinforcing stimulus, dopamine pathways in the brain are
activated. This network of pathways "releases a short pulse of dopamine onto many
dendrites, thus broadcasting a rather global reinforcement signal to postsynaptic
neurons."[12] This results in the plasticity of these synapses allowing recently activated
synapses to increase their sensitivity to efferent signals, hence increasing the probability
of occurrence for the recent responses preceding the reinforcement. These responses are,
statistically, the most likely to have been the behavior responsible for successfully
achieving reinforcement. But when the application of reinforcement is either less
immediate or less contingent (less consistent), the ability of dopamine to act upon the
appropriate synapses is reduced.

Operant variability
Operant variability is what allows a response to adapt to new situations. Operant behavior
is distinguished from reflexes in that its response topography (the form of the response)
is subject to slight variations from one performance to another. These slight variations
can include small differences in the specific motions involved, differences in the amount
of force applied, and small changes in the timing of the response. If a subject's history of
reinforcement is consistent, such variations will remain stable because the same
successful variations are more likely to be reinforced than less successful variations.
However, behavioral variability can also be altered when subjected to certain controlling
variables.[13]
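
A minimal sketch of this idea, with assumed numbers: each response varies slightly in
force around a mean, and only variants falling inside a reinforced band are "stamped
in", pulling the mean toward the successful region while trial-to-trial variability
persists.

    import random

    # Hypothetical sketch of operant variability (constants are mine):
    # presses vary in force; reinforced variants pull the mean toward
    # themselves, while the spread itself persists across trials.
    mean_force, spread = 0.35, 0.05      # arbitrary units
    REINFORCED = (0.45, 0.55)            # only this force band produces food

    for _ in range(5000):
        force = random.gauss(mean_force, spread)
        if REINFORCED[0] <= force <= REINFORCED[1]:
            mean_force += 0.2 * (force - mean_force)  # successful variant stamped in

    print(round(mean_force, 2))  # typically drifts to ~0.5; spread remains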

Avoidance learning
Avoidance training belongs to negative reinforcement schedules. The subject learns that a
certain response will result in the termination or prevention of an aversive stimulus.
There are two kinds of commonly used experimental settings: discriminated and free-
operant avoidance learning.

Discriminated avoidance learning


In discriminated avoidance learning, a novel stimulus such as a light or a tone is
followed by an aversive stimulus such as a shock (CS-US, similar to classical
conditioning). During the first trials (called escape-trials) the animal usually
experiences both the CS (Conditioned Stimulus) and the US (Unconditioned
Stimulus), performing the operant response to terminate the aversive US. Over
time, the animal learns to perform the response during the presentation of the
CS, thus preventing the aversive US from occurring. Such trials are called
avoidance trials.

Free-operant avoidance learning

In this experimental setting, no discrete stimulus is used to signal the occurrence
of the aversive stimulus. Rather, aversive stimuli (typically shocks) are presented
without explicit warning stimuli. There are two crucial time intervals determining
the rate of avoidance learning. The first is called the S-S interval (shock-shock
interval): the amount of time that passes between successive presentations of the
shock (unless the operant response is performed). The other is called the R-S
interval (response-shock interval), which specifies the length of the time interval
following an operant response during which no shocks will be delivered. Note that
each time the organism performs the operant response, the R-S interval without
shocks begins anew.
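
The scheduling rule can be stated compactly in code. The sketch below is a
hypothetical illustration of the S-S and R-S intervals (the interval lengths are
arbitrary): a shock falls due S-S seconds after the last shock unless a response
intervenes, and every response restarts an R-S clock.

    # Hypothetical sketch of a free-operant (Sidman-style) avoidance schedule.
    S_S = 5.0   # shock-shock interval, seconds (assumed)
    R_S = 20.0  # response-shock interval, seconds (assumed)

    def next_shock_time(last_shock_t, response_times):
        """Time of the next shock given the last shock and later responses."""
        due = last_shock_t + S_S
        for r in sorted(response_times):
            if r < due:          # a response pre-empts the pending shock...
                due = r + R_S    # ...and restarts the R-S interval
        return due

    print(next_shock_time(0.0, []))           # 5.0 : no responses, shock on schedule
    print(next_shock_time(0.0, [3.0]))        # 23.0: one press postpones the shock
    print(next_shock_time(0.0, [3.0, 10.0]))  # 30.0: each press resets the clock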

Two-process theory of avoidance


This theory was originally proposed to explain learning in discriminated avoidance
learning. It assumes that two processes take place: (a) classical conditioning of
fear, and (b) reinforcement of the operant response by fear reduction. During the
first trials of training (escape trials), the organism experiences both the CS and
the aversive US. The theory assumes that classical conditioning takes place on those
trials through the pairing of the CS with the US. Because of the aversive nature of
the US, the CS comes to elicit a conditioned emotional reaction (CER): fear. (In
classical conditioning, presenting a CS that has been conditioned with an aversive
US disrupts the organism's ongoing behavior.) In the second process, because the CS
signaling the aversive US has itself become aversive by eliciting fear in the
organism, reducing this unpleasant emotional reaction serves to reinforce the
operant response. The organism learns to make the response during the CS, thus
terminating the aversive internal reaction the CS elicits. An important aspect of
this theory is that the term "avoidance" does not really describe what the organism
is doing: it does not "avoid" the aversive US in the sense of anticipating it.
Rather, the organism escapes an aversive internal state caused by the CS.
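
As a rough illustration of the two processes, the sketch below (all update rules and
constants are my assumptions) first acquires fear through CS-US pairings, then
strengthens the avoidance response in proportion to the fear reduction it produces.

    import random

    # Hypothetical sketch of the two-process account (assumed constants).
    fear = 0.0               # conditioned fear elicited by the CS
    response_strength = 0.1  # probability of responding during the CS

    # Process 1: escape trials pair the CS with the aversive US.
    for _ in range(10):
        fear += 0.3 * (1.0 - fear)   # simple acquisition toward an asymptote

    # Process 2: a response during the CS terminates it; the resulting
    # drop in fear reinforces that response.
    for _ in range(50):
        if random.random() < response_strength:   # respond during the CS
            fear_reduction = fear                 # CS (and its fear) terminated
            response_strength += 0.1 * fear_reduction * (1 - response_strength)

    print(round(fear, 2), round(response_strength, 2))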

Verbal Behavior
Main article: Verbal Behavior (book)
In 1957, Skinner published Verbal Behavior, a theoretical extension of the work he had
pioneered since 1938. This work extended the theory of operant conditioning to human
behavior previously assigned to the fields of language and linguistics. Verbal
Behavior is the logical extension of Skinner's ideas, in which he introduced new
functional relationship categories such as intraverbals, autoclitics, mands, tacts and the
controlling relationship of the audience. All of these relationships were based on operant
conditioning and relied on no new mechanisms despite the introduction of new functional
categories.

Four term contingency


Applied behavior analysis, which is the name of the discipline directly descended from
Skinner's work, holds that behavior is explained in four terms: a conditional stimulus
(SC), a discriminative stimulus (Sd), a response (R), and a reinforcing stimulus (Srein
or Sr for reinforcers, sometimes Save for aversive stimuli).[14]
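
The four-term contingency is essentially a record with four fields. A sketch, with
field names and an example trial that are mine rather than drawn from the cited
source:

    from dataclasses import dataclass

    # Hypothetical sketch: the four-term contingency as a plain record.
    # Field names follow the abbreviations in the text (SC, Sd, R, Sr).
    @dataclass
    class FourTermContingency:
        conditional_stimulus: str     # SC: context conditioning the contingency
        discriminative_stimulus: str  # Sd: sets the occasion for the response
        response: str                 # R:  the operant behavior
        reinforcing_stimulus: str     # Sr: consequence maintaining the response

    trial = FourTermContingency(
        conditional_stimulus="experimental chamber",
        discriminative_stimulus="light on",
        response="lever press",
        reinforcing_stimulus="food pellet",
    )
    print(trial)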

Operant hoarding
Operant hoarding refers to the choice made by a rat, on a compound schedule
called a multiple schedule, that maximizes its rate of reinforcement in an operant
conditioning context. More specifically, rats were shown to have allowed food pellets to
accumulate in a food tray by continuing to press a lever on a continuous reinforcement
schedule instead of retrieving those pellets. Retrieval of the pellets always instituted a
one-minute period of extinction during which no additional food pellets were available
but those that had been accumulated earlier could be consumed. This finding appears to
contradict the usual finding that rats behave impulsively in situations in which there is a
choice between a smaller food object right away and a larger food object after some
delay. See schedules of reinforcement.[15]
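
The arithmetic behind the finding is straightforward: each retrieval costs a fixed
one-minute extinction period, so retrieving after every press pays that cost per
pellet, while hoarding pays it once per batch. A sketch with assumed press timing:

    # Hypothetical sketch of why hoarding maximizes reinforcement rate under
    # the schedule described above. The press time is my assumption.
    PRESS_TIME = 2.0   # seconds per reinforced lever press (assumed)
    EXTINCTION = 60.0  # seconds of extinction after each retrieval

    def pellets_per_second(batch_size):
        """Earn `batch_size` pellets, then retrieve them all at once."""
        session = batch_size * PRESS_TIME + EXTINCTION
        return batch_size / session

    print(pellets_per_second(1))   # retrieve every pellet: ~0.016 pellets/s
    print(pellets_per_second(20))  # hoard 20, retrieve once: 0.2 pellets/s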

An alternative to the Law of Effect


An alternative perspective has been proposed by R. Allen and Beatrix Gardner.[16][17]
Under this idea, which they called "feedforward", animals learn during operant
conditioning by simple pairing of stimuli, rather than by the consequences of their
actions. Skinner asserted that a rat or pigeon would only manipulate a lever if rewarded
for the action, a process he called shaping (reward for approaching then manipulating a
lever).[18] However, in order to prove the necessity of reward (reinforcement) in lever
pressing, a control condition in which food is delivered without regard to behavior
must also be run. Skinner never published such a control condition. Only much later was it found
that rats and pigeons do indeed learn to manipulate a lever when food comes irrespective
of behavior. This phenomenon is known as autoshaping.[19] Autoshaping demonstrates
that consequence of action is not necessary in an operant conditioning chamber, and it
contradicts the Law of Effect. Further experimentation has shown that rats naturally
handle small objects, such as a lever, when food is present.[20] Rats seem to insist on
handling the lever when free food is available (contra-freeloading)[21][22] and even when
pressing the lever leads to less food (omission training).[23][24] Whenever food is
presented, rats handle the lever, regardless of whether lever pressing leads to more
food. Therefore, handling a lever is a natural behavior that rats perform as a
preparatory feeding activity, and in turn, lever pressing cannot logically be used as
evidence that reward or reinforcement has occurred. In the absence of evidence for
reinforcement during operant conditioning, the learning that occurs during operant
experiments is, on this view, actually only Pavlovian (classical) conditioning, and
the dichotomy between Pavlovian and operant conditioning is an inappropriate
separation.

See also
• Applied behavior analysis, the application of operant behaviorism
• Behaviorism, the family of philosophies behind operant conditioning
• Cognitivism (psychology), a competing theory that invokes internal mechanisms without reference to behavior
• Educational psychology
• Experimental analysis of behavior
• Exposure therapy
• Habituation
• Matching law
• Negative (positive) contrast effect
• Premack principle
• Reinforcement learning
• Reward system
• Sensitization
• Social conditioning
• Spontaneous recovery

References
1. ^ Domjan, Michael, Ed., The Principles of Learning and Behavior, Fifth Edition,
Belmont, CA: Thomson/Wadsworth, 2003
2. ^ Tucker, M., Sigafoos, J., & Bushell, H. (1998). Use of noncontingent
reinforcement in the treatment of challenging behavior. Behavior Modification,
22, 529–547.
3. ^ Poling, A., & Normand, M. (1999). Noncontingent reinforcement: an
inappropriate description of time-based schedules that reduce behavior. Journal of
Applied Behavior Analysis, 32, 237-238.
4. ^ a b http://www.bbbautism.com/aba_shaping_and_chaining.htm
5. ^ Thorndike, E. L. (1901). Animal intelligence: An experimental study of the
associative processes in animals. Psychological Review Monograph Supplement,
2, 1-109.
6. ^ Mecca Chiesa (2004) Radical Behaviorism: the philosophy and the science
7. ^ "Activity of pallidal neurons during movement", M. R. DeLong, J.
Neurophysiol., 34:414-27, 1971
8. ^ a b Richardson RT, DeLong MR (1991): Electrophysiological studies of the
function of the nucleus basalis in primates. In Napier TC, Kalivas P, Hamin I
(eds), The Basal Forebrain: Anatomy to Function (Advances in Experimental
Medicine and Biology, vol. 295). New York: Plenum, pp. 232-252.
9. ^ PNAS 93:11219-24 1996, Science 279:1714-8 1998
10. ^ Neuron 63:244-253, 2009, Frontiers in Behavioral Neuroscience, 3: Article 13,
2009
11. ^ Michael J. Frank, Lauren C. Seeberger, and Randall C. O'Reilly (2004) "By
Carrot or by Stick: Cognitive Reinforcement Learning in Parkinsonism," Science
4, November 2004
12. ^ Schultz, Wolfram (1998). Predictive Reward Signal of Dopamine Neurons. The
Journal of Neurophysiology, 80(1), 1-27.
13. ^ Neuringer, A. (2002). Operant variability: Evidence, functions, and theory.
Psychonomic Bulletin & Review, 9(4), 672-705.
14. ^ Pierce & Cheney (2004) Behavior Analysis and Learning
15. ^ Cole, M.R. (1990). Operant hoarding: A new paradigm for the study of self-
control. Journal of the Experimental Analysis of Behavior, 53, 247-262.
16. ^ Gardner, R. A., & Gardner, B. T. (1988). Feedforward vs feedbackward: An
ethological alternative to the law of effect. Behavioral and Brain Sciences.
11:429-447.
17. ^ Gardner, R. A. & Gardner, B. T. (1998). The structure of learning from sign
stimuli to sign language. Mahwah NJ: Lawrence Erlbaum Associates.
18. ^ Skinner, B. F. (1953). Science and human behavior. Oxford, England:
Macmillan.
19. ^ Brown, P., & Jenkins, H. M. (1968). Autoshaping of the pigeon's key-peck. J.
Exp. Anal. Behav. 11:1-8.
20. ^ Timberlake, W. (1983). Rats’ responses to a moving object related to food or
water: A behavior-systems analysis. Animal Learning & Behavior. 11(3):309-
320.
21. ^ Jensen, G. D. (1963). Preference for bar pressing over 'freeloading' as a function
of number of rewarded presses. Journal of Experimental Psychology. 65:451-454.
22. ^ Neuringer, A.J. (1969). Animals respond for food in the presence of free food.
Science. 166:399-401.
23. ^ Williams, D.R. and Williams, H. (1969). Auto-maintenance in the pigeon:
sustained pecking despite contingent non-reinforcement. J. Exper. Analys. of
Behav. 12:511-520.
24. ^ Peden, B. F., Brown, M. P., & Hearst, E. (1977). Persistent approaches to a
signal for food despite food omission for approaching. Journal of Experimental
Psychology: Animal Behavior Processes. 3(4):377-399.

External links
• Behavioural Processes
• Journal of Applied Behavior Analysis
• Journal of the Experimental Analysis of Behavior
• Negative reinforcement
• Scholarpedia Operant conditioning
• scienceofbehavior.com
• Society for Quantitative Analysis of Behavior



Copyrights:
Dictionary: The American Heritage® Dictionary of the English Language, Fourth Edition.
Copyright © 2007, 2000 by Houghton Mifflin Company. Updated in 2009. Published by
Houghton Mifflin Company. All rights reserved.
Sci-Tech Encyclopedia: McGraw-Hill Encyclopedia of Science and Technology. Copyright ©
2005 by The McGraw-Hill Companies, Inc. All rights reserved.
Medical Dictionary: The American Heritage® Stedman's Medical Dictionary. Copyright ©
2002, 2001, 1995 by Houghton Mifflin Company.
WordNet: WordNet 1.7.1. Copyright © 2001 by Princeton University. All rights reserved.
Wikipedia: This article is licensed under the Creative Commons Attribution/Share-Alike
License. It uses material from the Wikipedia article Operant conditioning.

