
CONNECTIONISM (EDWARD L. THORNDIKE – 1898)
The prominent role of Aristotle’s laws of association in the 1900s may largely be due to the
work of Edward L. Thorndike—the recognized founder of a “learning theory [that] dominated all
others in America” for “nearly half a century” (Bower & Hilgard, 1981, p. 21). Thorndike’s
theory was based initially on a series of puzzle box experiments that he used to plot learning
curves of animals. In these experiments learning was defined as a function of the amount of
time required for the animal to escape from the box. A full account of his experiments,
including detailed descriptions of the puzzle boxes he used and examples of learning curves
that were plotted, can be found in Animal intelligence (Thorndike, 1898).

In Thorndike’s view, learning is the process of forming associations or bonds, which he defined
as “the connection of a certain act with a certain situation and resultant pleasure” (p. 8). His
work leading up to 1898 provided “the beginning of an exact estimate of just what
associations, simple and compound, an animal can form, how quickly he forms them, and how
long he retains them” (p. 108).

Although his original experimental subjects were cats, dogs, and chicks, Thorndike clearly
expressed his intention of applying his work to human learning when he said, “the main
purpose of the study of the animal mind is to learn the development of mental life down
through the phylum, to trace in particular the origin of human faculty” (1898, p. 2). From his
work with animals he inferred “as necessary steps in the evolution of human faculty, a vast
increase in the number of associations” (p. 108). A decade and a half later he expanded on the
theme of human learning in a three-volume series entitled Educational psychology, with volume
titles The original nature of man (1913a), The psychology of learning (1913b), and Mental work
and fatigue and individual differences and their causes (1914b). The material in these books
was very comprehensive and targeted advanced students of psychology. He summarized the
fundamental subject matter of the three volumes in a single, shorter textbook
entitled Educational psychology: briefer course (Thorndike, 1914a). In these volumes
Thorndike provided a formative culmination of his theory of learning in the form of three laws
of learning:

1. Law of Readiness – The law of readiness was intended to account for the motivational aspects
of learning and was tightly coupled to the language of the science of neurology. It was defined
in terms of the conduction unit, which term Thorndike (1914a) used to refer to “the neuron,
neurons, synapse, synapses, part of a neuron, part of a synapse, parts of neurons or parts of
synapses—whatever makes up the path which is ready for conduction” (p. 54). In its most
concise form, the law of readiness was stated as follows, “for a conduction unit ready to
conduct to do so is satisfying, and for it not to do so is annoying” (p. 54). The law of readiness
is illustrated through two intuitive examples given by Thorndike:

The sight of the prey makes the animal run after it, and also puts the conductions and
connections involved in jumping upon it when near into a state of excitability or readiness to be
made….When a child sees an attractive object at a distance, his neurons may be said to
prophetically prepare for the whole series of fixating it with the eyes, running toward it, seeing
it within reach, grasping, feeling it in his hand, and curiously manipulating it. (p. 53)

2. Law of Exercise – The law of exercise had two parts: (a) the law of use and (b) the law of
disuse. This law stated that connections grow stronger when used—where strength is defined
as “vigor and duration as well as the frequency of its making” (p. 70)—and grow weaker when
not used.

3. Law of Effect – The law of effect added to the law of exercise the notion that connections are
strengthened only when the making of the connection results in a satisfying state of affairs and
that they are weakened when the result is an annoying state of affairs.
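
Thorndike stated these laws verbally rather than mathematically, but their joint claim, that satisfying outcomes strengthen situation-response bonds while disuse and annoyers weaken them, can be illustrated with a toy simulation. The sketch below is a modern illustration rather than anything Thorndike published; the puzzle-box responses, learning rate, and decay rate are all invented for the example.

```python
import random

# Toy model of Thorndike's laws of exercise and effect (illustrative only).
# Connection strength between a situation and each candidate response; the
# strongest connection is the most likely to be tried.
strengths = {"pull_loop": 0.1, "push_bar": 0.1, "scratch": 0.1}

LEARN, DECAY = 0.2, 0.01  # arbitrary, invented rates

def trial(correct_response):
    # Choose a response with probability proportional to its strength.
    responses = list(strengths)
    weights = [strengths[r] for r in responses]
    choice = random.choices(responses, weights=weights)[0]
    for r in strengths:
        if r == choice and r == correct_response:
            strengths[r] += LEARN * (1.0 - strengths[r])  # law of effect: satisfier strengthens
        elif r == choice:
            strengths[r] *= 1.0 - LEARN                   # annoyer weakens (original 1914 form)
        else:
            strengths[r] *= 1.0 - DECAY                   # law of disuse: unused bonds fade
    return choice

random.seed(1)
for _ in range(50):
    trial("pull_loop")
print(strengths)  # "pull_loop" dominates after repeated satisfying outcomes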

These three laws were supplemented by five characteristics of learning “secondary in scope and
importance only to the laws of readiness, exercise, and effect” (Thorndike, 1914a, p. 132). They
are

1. Multiple response or varied reaction – When faced with a problem an animal will try one
response after another until it finds success.

2. Set or attitude – The responses that an animal will try, and the results that it will find
satisfying, depend largely on the animal’s attitude or state at the time.

The chick, according to his age, hunger, vitality, sleepiness, and the like, may be in one or
another attitude toward the external situation. A sleepier and less hungry chick will, as a rule,
be ‘set’ less toward escape-movements when confined; its neurons involved in roaming,
perceiving companions and feeding will be less ready to act; it will not, in popular language, ‘try
so hard to’ get out or ‘care so much about’ being out. (Thorndike, 1914a, p. 133)

3. Partial activity or prepotency of elements – Certain features of a situation may be more
prepotent in determining a response than others, and an animal is able to attend to critical
elements and ignore less important ones. This ability to attend to parts of a situation makes
possible response by analogy and learning through insight.

Similarly, a cat that has learned to get out of a dozen boxes—in each case by pulling some
loop, turning some bar, depressing a platform, or the like—will, in a new box, be, as we say,
‘more attentive to’ small objects on the sides of the box than it was before. The connections
made may then be, not absolutely with the gross situation as a total, but predominantly with
some element or elements of it. (Thorndike, 1914a, p. 134)

4. Assimilation – Due to the assimilation of analogous elements between two stimuli, an animal
will respond to a novel stimulus in the way it has previously responded to a similar stimulus. In
Thorndike’s words, “To any situations, which have no special original or acquired response of
their own, the response made will be that which by original or acquired nature is connected
with some situation which they resemble.” (Thorndike, 1914a, p. 135)

5. Associative shifting – Associative shifting refers to the transfer of a response evoked by a
given stimulus to an entirely different stimulus.

The ordinary animal ‘tricks’ in response to verbal signals are convenient illustrations. One, for
example, holds up before a cat a bit of fish, saying, “Stand up.” The cat, if hungry enough, and
not of fixed contrary habit, will stand up in response to the fish. The response, however,
contracts bonds also with the total situation, and hence to the human being in that position
giving that signal as well as to the fish. After enough trials, by proper arrangement, the fish can
be omitted, the other elements of the situation serving to evoke the response. Association may
later be further shifted to the oral signal alone. (Thorndike, 1914a, p. 136)

Sixteen years after publishing his theory in the Educational Psychology series based on
experiments with animals, Thorndike published twelve lectures that reported on experiments
performed with human subjects between 1927 and 1930 (see Thorndike, 1931). The results of
these experiments led Thorndike to make some modifications to his laws of connectionism.

The first change was to qualify the law of exercise. It was shown that the law of exercise, in and
of itself, does not cause learning, but is dependent upon the law of effect. In an experiment in
which subjects were blindfolded and repeatedly asked to draw a four-inch line with one quick
movement Thorndike discovered that doing so 3,000 times “caused no learning” because the
lines drawn in the eleventh or twelfth sittings were “not demonstrably better than or different
from those drawn in the first or second” (Thorndike, 1931, p. 10). He summarized this finding
by saying,

Our question is whether the mere repetition of a situation in and of itself causes learning, and
in particular whether the more frequent connections tend, just because they are more frequent,
to wax in strength at the expense of the less frequent. Our answer is No. (p. 13)

However, in drawing this conclusion, Thorndike was not disproving the law of exercise, but
merely qualifying it (by saying that repetition must be guided by feedback):

It will be understood, of course, that repetition of a situation is ordinarily followed by learning,
because ordinarily we reward certain of the connections leading from it and punish others by
calling the responses to which they respectively lead right or wrong, or by otherwise favoring
and thwarting them. Had I opened my eyes after each shove of the pencil during the second and
later sittings and measured the lines and been desirous of accuracy in the task, the connections
leading to 3.8, 3.9, 4.0, 4.1, and 4.2 would have become more frequent until I reached my limit
of skill in the task. (pp. 12-13)

The second change was to recast the relative importance of reward and punishment under the
law of effect. Through a variety of experiments Thorndike concluded that satisfiers (reward) and
annoyers (punishment) are not equal in their power to strengthen or weaken a connection,
respectively. In one of these experiments students learned Spanish vocabulary by selecting for
each Spanish word one of five possible English meanings followed by the rewarding feedback
of being told “Right” or the punishing feedback of being told “Wrong.” From the results of this
experiment Thorndike concluded that punishment does not diminish response as originally
stated in the law of effect. In his own words,

Indeed the announcement of “Wrong” in our experiments does not weaken the connection at all,
so far as we can see. Rather there is more gain in strength from the occurrence of the response
than there is weakening by the attachment of “Wrong” to it. Whereas two occurrences of a right
response followed by “Right” strengthen the connection much more than one does, two
occurrences of a wrong response followed by “Wrong” weaken that connection less than one
does. (p. 45)

In another experiment a series of words were read by the experimenter. The subject responded
to each by stating a number between 1 and 10. If the subject picked the number the
experimenter had predetermined to be “right” he was rewarded (the experimenter said “Right”),
otherwise he was punished (the experimenter said “Wrong”). Other than the feedback received
from the experimenter, the subject had no logical basis for selecting one number over another
when choosing a response. Each series was repeated many times; however, the sequence of
words was long, making it difficult for the subject to consciously remember any specific right
and wrong word-number pairs. From the results of this and other similar experiments
Thorndike demonstrated what he called the “spread of effect.” What he meant by this was that
“punished connections do not behave alike, but that the ones that are nearest to a reward are
strengthened” and that “the strengthening influence of a reward spreads to influence positively
not only the connection which it directly follows…but also any connections which are near
enough to it” (Thorndike, 1933, p. 174). More specifically,

A satisfying after-effect strengthens greatly the connection which it follows directly and to
which it belongs, and also strengthens by a smaller amount the connections preceding and
following that, and by a still smaller amount the preceding and succeeding connections two
steps removed. (p. 174)
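
Both revisions, the weak effect of punishment and the spread of a reward's influence to neighboring connections, lend themselves to a small numeric sketch. The gradient values below are invented to mirror the qualitative claims; they are not Thorndike's data.

```python
# Illustrative sketch of Thorndike's revised law of effect: reward strengthens
# strongly; "Wrong" barely weakens; and a reward's influence spreads, with
# diminishing force, to connections one and two steps away. The numbers are
# arbitrary, chosen only to mirror the qualitative claims above.

def effect_on_connection(distance_from_reward, rewarded):
    """Return the change in strength for a connection in the series."""
    if rewarded:
        spread = {0: 0.30, 1: 0.10, 2: 0.03}            # spread of effect
        return spread.get(abs(distance_from_reward), 0.0)
    return -0.02 if distance_from_reward == 0 else 0.0  # punishment weakens little

# A series of word-number responses; only the response at index 3 is "Right".
deltas = [effect_on_connection(i - 3, rewarded=True) for i in range(7)]
print(deltas)  # neighbors of the rewarded response are also strengthened
```
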
In addition to these two major changes to the law of exercise and the law of effect, Thorndike
also began to explore four other factors of learning that might be viewed as precursors to
cognitive learning research, which emerged in the decades that followed. They are summarized
by Bower and Hilgard (1981):

1. Belongingness – “a connection between two units or ideas is more readily established if
the subject perceives the two as belonging or going together” (p. 35).
2. Associative Polarity – “connections act more easily in the direction in which they were
formed than in the opposite direction” (p. 35). For example, if a person learning German
vocabulary always practices in the German-to-English direction, it will be harder to produce
the German word when prompted with the English word than to produce the English word
when prompted with the German one.
3. Stimulus Identifiability – “a situation is easy to connect to a response to the extent that
the situation is identifiable, distinctive, and distinguishable from others in a learning
series” (p. 36).
4. Response Availability – the ease of forming connections is directly proportional to the
ease with which the response required by the situation is summoned or executed:

Some responses are overlearned as familiar acts (e.g., touching our nose, tapping our toes)
which are readily executed upon command, whereas more finely skilled movements (e.g.,
drawing a line 4 inches as opposed to 5 inches long while blindfolded) may not be so readily
summonable. (pp. 36-37)

What is conditioning?
Conditioning is a type of learning that links some sort of trigger or stimulus
to a human behavior or response. When psychology was first starting as a
field, scientists felt they couldn’t objectively describe what was going on in
people’s heads. However, they could observe behaviors, so that’s what they
focused on in their experiments. The major theories about learning come
from the conclusions drawn from these experiments.

What is classical conditioning?


Imagine your favorite snack is peanut butter and jelly sandwiches. Whenever
you get that snack, it makes you happy and you start to jump around, doing
your happy PB&J dance. Your sandwich always comes on the same plate –
it’s big and orange and has a picture of a tiger on it. Eventually, you might
start doing your PB&J dance whenever you see your tiger plate on the table,
in anticipation of the sandwich arriving.

Cartoon explaining what classical conditioning is.

This type of conditioning is called classical conditioning. The presence of the
plate has caused you to have the same reaction as having a PB&J sandwich.
The sandwich is our stimulus (the unconditioned stimulus) and it elicits the
dance which is our response (the unconditioned response). “Unconditioned”
refers to the fact that no learning took place to connect the stimulus and
response - you saw the sandwich and automatically got so excited you
started to dance (like a reflex!).

Cartoon explaining what an unconditioned response is as well as a neutral stimulus.

The plate starts off as a neutral stimulus and elicits no reaction on its own. As
it is continuously paired with the sandwich, the plate becomes a conditioned
stimulus and elicits a conditioned response in the form of your happy dance.
Over time, you have learned to connect the plate and the feelings of
happiness that cause you to dance.

Cartoon showing how the tiger plate turns from a neutral stimulus to a
conditioned stimulus over time.
Also interesting to think about is just why it is you dance when you see that
sandwich in the first place. Earlier, we stated that it was the unconditioned
stimulus because it took no learning to cause you to dance at the sight of it.
At the start of our thought experiment, that was true. However, when you
were first introduced to PB&J, you would dance while eating it because it
tasted so good. Eventually, an association between sight and taste formed
(learned via classical conditioning) and you began to dance preemptively -
just the sight was enough to trigger the feelings of joy expressed by the
dance. If we really follow this line of thought about our everyday actions,
we’ll find that many, if not most, of our actions can be traced back to pretty
basic needs like food, shelter, comfort, etc.
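
The acquisition story above, a neutral stimulus gaining associative strength through repeated pairing with the sandwich, can be sketched with a simple error-driven update in the spirit of later formal models of classical conditioning (Rescorla-Wagner style). This is an illustration only; the learning rate and asymptote are arbitrary.

```python
# Sketch of classical conditioning acquisition with an error-driven update
# (in the spirit of the later Rescorla-Wagner model, not part of the PB&J
# story itself). V is the plate's associative strength; lam is the maximum
# strength the sandwich supports; alpha is an invented learning rate.
alpha, lam = 0.3, 1.0
V = 0.0  # the plate starts as a neutral stimulus

for pairing in range(1, 11):      # plate repeatedly paired with the sandwich
    V += alpha * (lam - V)        # strength grows toward its asymptote
    print(f"pairing {pairing:2d}: plate strength = {V:.2f}")
# Once V is high, the plate alone evokes the (conditioned) happy dance.
```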

A cartoon showing a person expressing that they "feel like dancing" every
time they see the tiger plate.

What is operant conditioning?


In classical conditioning, the stimuli that precede a behavior will vary (PB&J
sandwich, then tiger plate) to alter that behavior (e.g., dancing with the tiger
plate!). In operant conditioning, the consequences which come after a
behavior will vary, to alter that behavior. Imagine years down the road you
are still enamored of delicious PB&J sandwiches, and now are trying to teach
yourself to be a good roommate. The house rule is that whoever leaves their
dishes unwashed the longest has to take out the trash. You hate taking out the
trash, so you develop a system - whenever you remember to wash your plate,
you are allowed to surf the internet; otherwise you're not allowed. The more
dishes you wash, the more you get to procrastinate on your favorite sites.
Initially, you leave the plate in the sink a few times, then you begin to
remember after a day or so, and finally you start to wash your dishes
immediately after using them. This process of shaping involves intermediate
behaviors (leaving the plate in the sink and beginning to come back to wash
the dishes within hours) that start moving you towards the goal behavior
(washing your dishes immediately).

Cartoon showing three different days and how operant conditioning works in
the context of washing dishes and going on the internet.

How do we influence behavior?


Operant conditioning changes behaviors by using consequences, and these
consequences will have two characteristics:

1. Reinforcement or punishment
- Reinforcement is a response or consequence that causes a behavior to occur with greater frequency.
- Punishment is a response or consequence that causes a behavior to occur with less frequency.

2. Positive or negative
- Positive means adding a new stimulus.
- Negative means removing an old stimulus.

There end up being four different ways we can affect behavior with operant conditioning:

- positive reinforcement
- negative reinforcement
- positive punishment
- negative punishment

Let’s go back to our example of washing the dishes, and consider the four
different types of operant-conditioning-based consequences. If you leave the
dish on the table instead of washing it, some sort of punishment will happen
because this is an undesired behavior.

- Positive punishment: You will get a new chore such as sweeping the floors! (adding a new stimulus).
- Negative punishment: You will not get to eat the usual apple pie dessert (removing an old stimulus).

If you remember to wash your plate, some sort of reinforcement will happen
because this is a desired behavior.

- Positive reinforcement: You will get to make one online purchase! (adding a new stimulus).
- Negative reinforcement: You won't have to take out the trash this week, a standard chore (removing an old stimulus).
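
Because each consequence type is fully determined by two yes/no questions (is a stimulus added or removed, and does the behavior become more or less frequent), the 2x2 structure can be made explicit in a few lines. The sketch below simply encodes the definitions and examples above.

```python
# The 2x2 of operant consequences, encoded directly from the definitions
# above (the comments use the dish-washing examples from the text).
def consequence(stimulus_added, behavior_increases):
    kind = "reinforcement" if behavior_increases else "punishment"
    sign = "positive" if stimulus_added else "negative"
    return f"{sign} {kind}"

print(consequence(True, False))   # positive punishment: new chore added
print(consequence(False, False))  # negative punishment: dessert removed
print(consequence(True, True))    # positive reinforcement: purchase added
print(consequence(False, True))   # negative reinforcement: trash duty removed
```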

How effective is the conditioning?


Imagine your tiger plate was one of a set of plates – jungle cat plates. There is
a lion, a jaguar, and a leopard as well.

Cartoon showing the different types of animal plates in the set.


They’re all generally the same shape and color, so you react to these plates
the same way you reacted to the tiger plate (the original conditioned
stimulus) and do your happy dance. We call this generalization – when a
conditioned response (happy dance) occurs in reaction to a stimulus (jungle
cat plates) other than (but often similar to) the conditioned one (tiger plate).
A good way to remember is that now you do a happy dance for cat plates in
general. The opposite of generalization is discrimination - the ability to tell
different stimuli apart and react only to certain ones. You show
discrimination whenever you don’t dance because you can tell the difference
between the peanut butter and the pickle jars, for example, or by dancing only
at snack time, since you know that’s the only time the PB&J happens.

Imagine that you’ve run out of peanut butter, so you’re stuck with tuna salad
for weeks (oh no!). Your parents try to make it better by serving it on your
favorite tiger plate, but you soon realize the tiger plate does not mean PB&J.
You lose the association between the tiger plate and PB&J, and stop doing
your happy dance whenever you see that plate. We call this extinction – your
conditioned response (happy dance) disappeared. However, when peanut
butter is in your house again and your parents serve you PB&J on your tiger
plate, the previous association between the tiger plate and the PB&J dance
will quickly come back in full force. We call this spontaneous recovery.
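
One simple way to picture extinction and spontaneous recovery is as an associative strength that decays across unreinforced presentations of the plate and then partially rebounds after a rest. The decay rate and recovery level below are invented purely to show the shape of the effect.

```python
# Invented decay model of extinction and spontaneous recovery; the rate
# (0.7 per trial) and the recovery level (half of full strength) are
# arbitrary, chosen only to show the shape of the effect.
strength = 1.0                 # fully conditioned tiger-plate response

for trial in range(1, 11):     # tuna-salad weeks: plate, but no PB&J
    strength *= 0.7            # each unreinforced trial weakens the response
print(f"after extinction: {strength:.2f}")

strength = max(strength, 0.5)  # after a rest, part of the response returns
print(f"after spontaneous recovery: {strength:.2f}")
```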

While the discussion above focused on our examples from classical
conditioning, the same concepts can be applied to operant conditioning as
well. Maybe your chore scheme works so well you begin to wipe down the
kitchen counters whenever you make a big meal, or you refuse to allow
yourself pie if you haven’t folded your laundry.
What are examples of conditioning in your daily
life?
Conditioning, both classical and operant, can be seen throughout our daily
lives. Insurance companies will charge you more if you keep getting into
accidents (negative punishment) or give you congratulatory certificates for
safer driving (positive reinforcement). When driving, seeing flashing lights in
your rearview mirror coupled with a siren will cause a gut feeling of dread
even before the officer comes by with your ticket. Maybe it’s not even you
they’re pulling over, but those signals (conditioned stimuli) are so associated
with tickets and fines (unconditioned stimuli) that you can feel it in your
stomach (conditioned response). Now that we’ve explored conditioning
some, be on the lookout for examples in your day to day life, and maybe even
consider using some of those techniques on yourself – for every hour and a
half of studying, give yourself a ten minute break to stretch and watch funny
videos or walk around!

Operant Conditioning (B.F. Skinner)
The theory of B.F. Skinner is based upon the idea that learning is a function of
change in overt behavior. Changes in behavior are the result of an individual’s
response to events (stimuli) that occur in the environment. A response produces
a consequence such as defining a word, hitting a ball, or solving a math problem.
When a particular Stimulus-Response (S-R) pattern is reinforced (rewarded), the
individual is conditioned to respond. The distinctive characteristic of operant
conditioning relative to previous forms of behaviorism (e.g., connectionism, drive
reduction) is that the organism can emit responses instead of only eliciting
a response due to an external stimulus.
Reinforcement is the key element in Skinner’s S-R theory. A reinforcer is
anything that strengthens the desired response. It could be verbal praise, a good
grade or a feeling of increased accomplishment or satisfaction. The theory also
covers negative reinforcers — any stimulus that results in the increased
frequency of a response when it is withdrawn (different from aversive stimuli —
punishment — which result in reduced responses). A great deal of attention was
given to schedules of reinforcement (e.g. interval versus ratio) and their effects
on establishing and maintaining behavior.

One of the distinctive aspects of Skinner’s theory is that it attempted to provide
behavioral explanations for a broad range of cognitive phenomena. For example,
Skinner explained drive (motivation) in terms of deprivation and reinforcement
schedules. Skinner (1957) tried to account for verbal learning and language
within the operant conditioning paradigm, although this effort was strongly
rejected by linguists and psycholinguists. Skinner (1971) deals with the issue of
free will and social control.

Application
Operant conditioning has been widely applied in clinical settings (i.e., behavior
modification) as well as teaching (i.e., classroom management) and instructional
development (e.g., programmed instruction). Parenthetically, it should be noted
that Skinner rejected the idea of theories of learning (see Skinner, 1950).

Example
By way of example, consider the implications of reinforcement theory as applied
to the development of programmed instruction (Markle, 1969; Skinner, 1968):

1. Practice should take the form of question (stimulus) – answer (response)
frames which expose the student to the subject in gradual steps
2. Require that the learner make a response for every frame and receive
immediate feedback
3. Try to arrange the difficulty of the questions so the response is always
correct and hence a positive reinforcement
4. Ensure that good performance in the lesson is paired with secondary
reinforcers such as verbal praise, prizes and good grades.
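
Taken together, these four points describe a frame-by-frame drill loop: present a small step, require a response, give immediate feedback, and keep the steps easy enough that feedback is usually positive. The sketch below is a minimal illustration of that structure; the arithmetic frames are invented placeholders for real subject matter.

```python
# Minimal sketch of a programmed-instruction drill following the four points
# above: small steps, a response to every frame, immediate feedback, and
# questions easy enough that the answer is usually right.
frames = [
    ("2 + 2 = ?", "4"),
    ("2 + 3 = ?", "5"),
    ("2 + 4 = ?", "6"),
]

def run_drill():
    for prompt, answer in frames:          # gradual steps (point 1)
        response = input(prompt + " ")     # a response to every frame (point 2)
        if response.strip() == answer:
            print("Right!")                # immediate positive reinforcement (points 3-4)
        else:
            print(f"Not quite - the answer is {answer}.")  # immediate feedback

# run_drill()  # uncomment to try interactively
```
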
Principles
1. Behavior that is positively reinforced will reoccur; intermittent reinforcement
is particularly effective
2. Information should be presented in small amounts so that responses can
be reinforced (“shaping”)
3. Reinforcements will generalize across similar stimuli (“stimulus
generalization”) producing secondary conditioning
References
 Markle, S. (1969). Good Frames and Bad (2nd Ed.). New York: Wiley.
 Skinner, B.F. (1950). Are theories of learning necessary? Psychological
Review, 57(4), 193-216.
 Skinner, B.F. (1953). Science and Human Behavior. New York: Macmillan.
 Skinner, B.F. (1954). The science of learning and the art of teaching. Harvard
Educational Review, 24(2), 86-97.
 Skinner, B.F. (1957). Verbal Behavior. New York: Appleton-Century-Crofts.
 Skinner, B.F. (1968). The Technology of Teaching. New York: Appleton-
Century-Crofts.
 Skinner, B.F. (1971). Beyond Freedom and Dignity. New York: Knopf.
Operant conditioning
From Wikipedia, the free encyclopedia


Operant conditioning:

- Reinforcement (increase behaviour)
  - Positive reinforcement: add appetitive stimulus following correct behaviour
  - Negative reinforcement
    - Escape: remove noxious stimulus following correct behaviour
    - Active avoidance: behaviour avoids noxious stimulus
- Punishment (decrease behaviour)
  - Positive punishment: add noxious stimulus following behaviour
  - Negative punishment: remove appetitive stimulus following behaviour
- Extinction (decrease behaviour)
Operant conditioning (also called instrumental conditioning) is a learning process through which
the strength of a behavior is modified by reinforcement or punishment. It is also a procedure that is
used to bring about such learning.
Although operant and classical conditioning both involve behaviors controlled by environmental
stimuli, they differ in nature. In operant conditioning, stimuli present when a behavior is rewarded or
punished come to control that behavior. For example, a child may learn to open a box to get the
sweets inside, or learn to avoid touching a hot stove; in operant terms, the box and the stove are
"discriminative stimuli". Operant behavior is said to be "voluntary": for example, the child may face a
choice between opening the box and petting a puppy.
In contrast, classical conditioning involves involuntary behavior based on the pairing of stimuli with
biologically significant events. For example, sight of sweets may cause a child to salivate, or the
sound of a door slam may signal an angry parent, causing a child to tremble. Salivation and
trembling are not operants; they are not reinforced by their consequences, and they are not
voluntarily "chosen".
The study of animal learning in the 20th century was dominated by the analysis of these two sorts of
learning,[1] and they are still at the core of behavior analysis.

Historical note
Thorndike's law of effect
Main article: Law of effect
Operant conditioning, sometimes called instrumental learning, was first extensively studied
by Edward L. Thorndike (1874–1949), who observed the behavior of cats trying to escape from
home-made puzzle boxes.[2] A cat could escape from the box by a simple response such as pulling a
cord or pushing a pole, but when first constrained, the cats took a long time to get out. With repeated
trials ineffective responses occurred less frequently and successful responses occurred more
frequently, so the cats escaped more and more quickly.[2] Thorndike generalized this finding in his law
of effect, which states that behaviors followed by satisfying consequences tend to be repeated and
those that produce unpleasant consequences are less likely to be repeated. In short, some
consequences strengthen behavior and some consequences weaken behavior. By plotting escape
time against trial number Thorndike produced the first known animal learning curves through this
procedure.[3]
Humans appear to learn many simple behaviors through the sort of process studied by Thorndike,
now called operant conditioning. That is, responses are retained when they lead to a successful
outcome and discarded when they do not, or when they produce aversive effects. This usually
happens without being planned by any "teacher", but operant conditioning has been used by parents
in teaching their children for thousands of years.[4]
B. F. Skinner
Main article: B. F. Skinner

B.F. Skinner (1904–1990) is referred to as the father of operant conditioning, and his work is
frequently cited in connection with this topic. His 1938 book "The Behavior of Organisms: An
Experimental Analysis"[5] initiated his lifelong study of operant conditioning and its application to
human and animal behavior. Following the ideas of Ernst Mach, Skinner rejected Thorndike's
reference to unobservable mental states such as satisfaction, building his analysis on observable
behavior and its equally observable consequences.[6]
Skinner believed that classical conditioning was too simplistic to be used to describe something as
complex as human behavior. Operant conditioning, in his opinion, better described human behavior
as it examined causes and effects of intentional behavior.
To implement his empirical approach, Skinner invented the operant conditioning chamber, or
"Skinner Box", in which subjects such as pigeons and rats were isolated and could be exposed to
carefully controlled stimuli. Unlike Thorndike's puzzle box, this arrangement allowed the subject to
make one or two simple, repeatable responses, and the rate of such responses became Skinner's
primary behavioral measure.[7] Another invention, the cumulative recorder, produced a graphical
record from which these response rates could be estimated. These records were the primary data
that Skinner and his colleagues used to explore the effects on response rate of various
reinforcement schedules.[8] A reinforcement schedule may be defined as "any procedure that delivers
reinforcement to an organism according to some well-defined rule".[9] The effects of schedules
became, in turn, the basic findings from which Skinner developed his account of operant
conditioning. He also drew on many less formal observations of human and animal behavior.[10]
Many of Skinner's writings are devoted to the application of operant conditioning to human
behavior.[11] In 1948 he published Walden Two, a fictional account of a peaceful, happy, productive
community organized around his conditioning principles.[12] In 1957, Skinner published Verbal
Behavior,[13] which extended the principles of operant conditioning to language, a form of human
behavior that had previously been analyzed quite differently by linguists and others. Skinner defined
new functional relationships such as "mands" and "tacts" to capture some essentials of language,
but he introduced no new principles, treating verbal behavior like any other behavior controlled by its
consequences, which included the reactions of the speaker's audience.

Concepts and procedures


Origins of operant behavior: operant variability
Operant behavior is said to be "emitted"; that is, initially it is not elicited by any particular stimulus.
Thus one may ask why it happens in the first place. The answer to this question is like Darwin's
answer to the question of the origin of a "new" bodily structure, namely, variation and selection.
Similarly, the behavior of an individual varies from moment to moment, in such aspects as the
specific motions involved, the amount of force applied, or the timing of the response. Variations that
lead to reinforcement are strengthened, and if reinforcement is consistent, the behavior tends to
remain stable. However, behavioral variability can itself be altered through the manipulation of
certain variables.[14]
Modifying operant behavior: reinforcement and punishment
Main articles: Reinforcement and Punishment (psychology)

Reinforcement and punishment are the core tools through which operant behavior is modified.
These terms are defined by their effect on behavior. Either may be positive or negative.

 Positive reinforcement and negative reinforcement increase the probability of a behavior that
they follow, while positive punishment and negative punishment reduce the probability of
behaviour that they follow.
Another procedure is called "extinction".

 Extinction occurs when a previously reinforced behavior is no longer reinforced with either
positive or negative reinforcement. During extinction the behavior becomes less probable.
Behavior that was reinforced only occasionally can take even longer to extinguish than
behavior that was reinforced at every opportunity, because the organism has learned that
repeated responses may be needed before reinforcement arrives.[15]
There are a total of five consequences.

1. Positive reinforcement occurs when a behavior (response) is rewarding or the behavior is
followed by another stimulus that is rewarding, increasing the frequency of that
behavior.[16] For example, if a rat in a Skinner box gets food when it presses a lever, its rate
of pressing will go up. This procedure is usually called simply reinforcement.
2. Negative reinforcement (a.k.a. escape) occurs when a behavior (response) is followed by
the removal of an aversive stimulus, thereby increasing the original behavior's frequency. In
the Skinner Box experiment, the aversive stimulus might be a loud noise continuously inside
the box; negative reinforcement would happen when the rat presses a lever to turn off the
noise.
3. Positive punishment (also referred to as "punishment by contingent stimulation") occurs
when a behavior (response) is followed by an aversive stimulus. Example: pain from
a spanking, which would often result in a decrease in that behavior. Positive punishment is a
confusing term, so the procedure is usually referred to as "punishment".
4. Negative punishment (penalty) (also called "punishment by contingent withdrawal") occurs
when a behavior (response) is followed by the removal of a stimulus. Example: taking away
a child's toy following an undesired behavior by him/her, which would result in a decrease in
the undesirable behavior.
5. Extinction occurs when a behavior (response) that had previously been reinforced is no
longer effective. Example: a rat is first given food many times for pressing a lever, until the
experimenter no longer gives out food as a reward. The rat would typically press the lever
less often and then stop. The lever pressing would then be said to be "extinguished."
It is important to note that actors (e.g. a rat) are not spoken of as being reinforced, punished, or
extinguished; it is the actions that are reinforced, punished, or extinguished. Reinforcement,
punishment, and extinction are not terms whose use is restricted to the laboratory. Naturally-
occurring consequences can also reinforce, punish, or extinguish behavior and are not always
planned or delivered on purpose.
Schedules of reinforcement
Schedules of reinforcement are rules that control the delivery of reinforcement. The rules specify
either the time that reinforcement is to be made available, or the number of responses to be made,
or both. Many rules are possible, but the following are the most basic and commonly used:[17][8]

 Fixed interval schedule: Reinforcement occurs following the first response after a fixed time has
elapsed after the previous reinforcement. This schedule yields a "break-run" pattern of response;
that is, after training on this schedule, the organism typically pauses after reinforcement, and
then begins to respond rapidly as the time for the next reinforcement approaches.
 Variable interval schedule: Reinforcement occurs following the first response after a variable
time has elapsed from the previous reinforcement. This schedule typically yields a relatively
steady rate of response that varies with the average time between reinforcements.
 Fixed ratio schedule: Reinforcement occurs after a fixed number of responses have been
emitted since the previous reinforcement. An organism trained on this schedule typically pauses
for a while after a reinforcement and then responds at a high rate. If the response requirement is
low there may be no pause; if the response requirement is high the organism may quit
responding altogether.
 Variable ratio schedule: Reinforcement occurs after a variable number of responses have been
emitted since the previous reinforcement. This schedule typically yields a very high, persistent
rate of response.
 Continuous reinforcement: Reinforcement occurs after each response. Organisms typically
respond as rapidly as they can, given the time taken to obtain and consume reinforcement, until
they are satiated.
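
To make the definitions concrete, each schedule can be expressed as a rule that generates the requirement for the next reinforcement: a response count for ratio schedules, an elapsed time for interval schedules. This is an illustrative sketch; the ratio of 10 and the 60-second interval are arbitrary example parameters.

```python
import random

def next_requirement(schedule):
    """Requirement for the next reinforcement under each basic schedule.
    Ratio schedules return a response count; interval schedules return the
    seconds that must elapse before the next response is reinforced."""
    if schedule == "FR":   # fixed ratio: every 10th response
        return 10
    if schedule == "VR":   # variable ratio: 10 responses on average
        return random.randint(1, 19)
    if schedule == "FI":   # fixed interval: 60 s, then the next response pays off
        return 60.0
    if schedule == "VI":   # variable interval: 60 s on average
        return random.uniform(1.0, 119.0)
    if schedule == "CRF":  # continuous reinforcement: every response
        return 1
    raise ValueError(schedule)

random.seed(0)
print([next_requirement("VR") for _ in range(5)])          # varying counts around 10
print([round(next_requirement("VI")) for _ in range(3)])   # varying wait times
```
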
Factors that alter the effectiveness of reinforcement and punishment
The effectiveness of reinforcement and punishment can be changed.

1. Satiation/Deprivation: The effectiveness of a positive or "appetitive" stimulus will be
reduced if the individual has received enough of that stimulus to satisfy his/her appetite. The
opposite effect will occur if the individual becomes deprived of that stimulus: the
effectiveness of a consequence will then increase. A subject with a full stomach wouldn't
feel as motivated as a hungry one.[18]
2. Immediacy: An immediate consequence is more effective than a delayed one. If one gives a
dog a treat for sitting within five seconds, the dog will learn faster than if the treat is given
thirty seconds later.[19]
3. Contingency: To be most effective, reinforcement should occur consistently after responses
and not at other times. Learning may be slower if reinforcement is intermittent, that is,
following only some instances of the same response. Responses reinforced intermittently
are usually slower to extinguish than are responses that have always been reinforced.[18]
4. Size: The size, or amount, of a stimulus often affects its potency as a reinforcer. Humans
and animals engage in cost-benefit analysis. A smaller amount of food may not, to a rat,
seem a worthwhile reward for an effortful lever press. A pile of quarters from a slot machine
may keep a gambler pulling the lever longer than a single quarter. Most of these factors
serve biological functions. For example, the process of satiation helps the organism maintain
a stable internal environment (homeostasis). When an organism has been deprived of
sugar, for example, the taste of sugar is an effective reinforcer. When the organism's blood
sugar reaches or exceeds an optimum level the taste of sugar becomes less effective or
even aversive.
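
As a rough sketch, the four factors can be folded into a single relative-effectiveness score, with deprivation, immediacy, contingency, and size each scaling the reinforcer's potency. The multiplicative form and the constants below are invented for illustration; this is not an established quantitative model.

```python
# Illustrative (invented) combination of the four factors above. Inputs other
# than delay_s are normalized to [0, 1]; the output is a relative score only.
def reinforcer_effectiveness(deprivation, delay_s, contingency, size):
    immediacy = 1.0 / (1.0 + delay_s)  # delayed consequences count for less
    return deprivation * immediacy * contingency * size

# A hungry rat, instant and consistent food, decent portion:
print(reinforcer_effectiveness(1.0, 0.0, 1.0, 0.8))   # high
# A sated rat, 30 s delay, inconsistent delivery:
print(reinforcer_effectiveness(0.2, 30.0, 0.5, 0.8))  # near zero
```
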
Shaping
Main article: Shaping (psychology)

Shaping is a conditioning method much used in animal training and in teaching nonverbal humans. It
depends on operant variability and reinforcement, as described above. The trainer starts by
identifying the desired final (or "target") behavior. Next, the trainer chooses a behavior that the
animal or person already emits with some probability. The form of this behavior is then gradually
changed across successive trials by reinforcing behaviors that approximate the target behavior more
and more closely. When the target behavior is finally emitted, it may be strengthened and
maintained by the use of a schedule of reinforcement.
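
Shaping can be sketched as a loop in which behavior varies around a baseline and only variants that come closer to the target are reinforced, so the baseline drifts toward the target. Treating the behavior as a single number (say, jump height) keeps the illustration minimal; all values are invented.

```python
import random

# Sketch of shaping by successive approximation. Emitted behavior varies
# around a baseline (operant variability); reinforcing variants that come
# closer to the target pulls the baseline toward it. Values are invented.
random.seed(2)
baseline, target = 0.0, 10.0

for step in range(60):
    attempt = baseline + random.gauss(0.0, 1.5)         # operant variability
    if abs(attempt - target) < abs(baseline - target):  # closer approximation?
        baseline = attempt                              # reinforced variant persists
print(f"shaped behavior near target: {baseline:.1f}")
```
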
Noncontingent reinforcement
Noncontingent reinforcement is the delivery of reinforcing stimuli regardless of the organism's
behavior. Noncontingent reinforcement may be used in an attempt to reduce an undesired target
behavior by reinforcing multiple alternative responses while extinguishing the target response.[20] As
no measured behavior is identified as being strengthened, there is controversy surrounding the use
of the term noncontingent "reinforcement".[21]
Stimulus control of operant behavior
Main article: Stimulus control
Though initially operant behavior is emitted without an identified reference to a particular stimulus,
during operant conditioning operants come under the control of stimuli that are present when
behavior is reinforced. Such stimuli are called "discriminative stimuli." A so-called "three-term
contingency" is the result. That is, discriminative stimuli set the occasion for responses that produce
reward or punishment. Example: a rat may be trained to press a lever only when a light comes on; a
dog rushes to the kitchen when it hears the rattle of his/her food bag; a child reaches for candy when
s/he sees it on a table.
Discrimination, generalization & context
Most behavior is under stimulus control. Several aspects of this may be distinguished:

 Discrimination typically occurs when a response is reinforced only in the presence of a specific
stimulus. For example, a pigeon might be fed for pecking at a red light and not at a green light;
in consequence, it pecks at red and stops pecking at green. Many complex combinations of
stimuli and other conditions have been studied; for example an organism might be reinforced on
an interval schedule in the presence of one stimulus and on a ratio schedule in the presence of
another.
 Generalization is the tendency to respond to stimuli that are similar to a previously trained
discriminative stimulus. For example, having been trained to peck at "red" a pigeon might also
peck at "pink", though usually less strongly.
 Context refers to stimuli that are continuously present in a situation, like the walls, tables, chairs,
etc. in a room, or the interior of an operant conditioning chamber. Context stimuli may come to
control behavior as do discriminative stimuli, though usually more weakly. Behaviors learned in
one context may be absent, or altered, in another. This may cause difficulties for behavioral
therapy, because behaviors learned in the therapeutic setting may fail to occur elsewhere.
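
The generalization gradient described above is commonly pictured as response strength falling off smoothly with distance from the trained stimulus. The sketch below uses a Gaussian fall-off over light wavelength as a stand-in for the red/green pigeon example; the shape and width are conventional illustrations, not data.

```python
import math

# Sketch of a generalization gradient: response strength falls off with
# distance from the trained stimulus (here, light wavelength in nm). The
# Gaussian shape and width are conventional illustrations, not data.
def response_strength(stimulus_nm, trained_nm=650.0, width_nm=30.0):
    return math.exp(-((stimulus_nm - trained_nm) ** 2) / (2 * width_nm ** 2))

for nm in (650, 620, 560):   # red (trained), orange-pink, green
    print(f"{nm} nm -> {response_strength(nm):.2f}")
# Discrimination training would sharpen this gradient; context can shift it.
```
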
Behavioral sequences: conditioned reinforcement and chaining
Most behavior cannot easily be described in terms of individual responses reinforced one by one.
The scope of operant analysis is expanded through the idea of behavioral chains, which are
sequences of responses bound together by the three-term contingencies defined above. Chaining is
based on the fact, experimentally demonstrated, that a discriminative stimulus not only sets the
occasion for subsequent behavior, but it can also reinforce a behavior that precedes it. That is, a
discriminative stimulus is also a "conditioned reinforcer". For example, the light that sets the
occasion for lever pressing may be used to reinforce "turning around" in the presence of a noise.
This results in the sequence "noise – turn-around – light – press lever – food". Much longer chains
can be built by adding more stimuli and responses.
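
A chain can be represented minimally as an ordered list of (discriminative stimulus, response) links ending in a primary reinforcer, with each stimulus doubling as the conditioned reinforcer for the response that precedes it. The sketch below encodes the noise-light-food chain from the text.

```python
# A behavioral chain as (discriminative stimulus, response) links. Each
# stimulus sets the occasion for the next response and, as a conditioned
# reinforcer, strengthens the response that produced it.
chain = [
    ("noise", "turn around"),
    ("light", "press lever"),
]
terminal_reinforcer = "food"

for stimulus, response in chain:
    print(f"{stimulus!r} sets the occasion for {response!r}")
print(f"...which finally produces {terminal_reinforcer!r}")
```
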
Escape and avoidance
In escape learning, a behavior terminates an (aversive) stimulus. For example, shielding one's eyes
from sunlight terminates the (aversive) stimulation of bright light in one's eyes. (This is an example of
negative reinforcement, defined above.) Behavior that is maintained by preventing a stimulus is
called "avoidance," as, for example, putting on sun glasses before going outdoors. Avoidance
behavior raises the so-called "avoidance paradox", for, it may be asked, how can the
non-occurrence of a stimulus serve as a reinforcer? This question is addressed by several theories of
avoidance (see below).
Two kinds of experimental settings are commonly used: discriminated and free-operant avoidance
learning.
Discriminated avoidance learning
A discriminated avoidance experiment involves a series of trials in which a neutral stimulus such as
a light is followed by an aversive stimulus such as a shock. After the neutral stimulus appears an
operant response such as a lever press prevents or terminates the aversive stimulus. In early trials,
the subject does not make the response until the aversive stimulus has come on, so these early
trials are called "escape" trials. As learning progresses, the subject begins to respond during the
neutral stimulus and thus prevents the aversive stimulus from occurring. Such trials are called
"avoidance trials." This experiment is said to involve classical conditioning because a neutral CS
(conditioned stimulus) is paired with the aversive US (unconditioned stimulus); this idea underlies
the two-factor theory of avoidance learning described below.
Free-operant avoidance learning
In free-operant avoidance a subject periodically receives an aversive stimulus (often an electric
shock) unless an operant response is made; the response delays the onset of the shock. In this
situation, unlike discriminated avoidance, no prior stimulus signals the shock. Two crucial time
intervals determine the rate of avoidance learning. The first is the S-S (shock-shock) interval,
the time between successive shocks in the absence of a response. The second interval is the R-S
(response-shock) interval. This specifies the time by which an operant response delays the onset of
the next shock. Note that each time the subject performs the operant response, the R-S interval
without shock begins anew.
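
The two intervals define a simple event loop: by default the next shock arrives S-S seconds after the last one, and each response reschedules it to R-S seconds after the response. The simulation below is a minimal sketch with arbitrary interval values.

```python
# Sketch of free-operant avoidance timing. Shocks recur every s_s seconds in
# the absence of responding; each response postpones the next shock to r_s
# seconds from the moment of the response. Interval values are arbitrary.
def simulate_avoidance(response_times, s_s=5.0, r_s=20.0, duration=60.0):
    """Return the times at which shocks are delivered."""
    shocks = []
    next_shock = s_s
    for t in sorted(response_times):
        while next_shock <= t:          # deliver shocks scheduled before response
            shocks.append(next_shock)
            next_shock += s_s           # S-S interval between unresponded shocks
        next_shock = t + r_s            # a response restarts the R-S interval
    while next_shock <= duration:
        shocks.append(next_shock)
        next_shock += s_s
    return shocks

print(simulate_avoidance([]))                 # no responses: shocks every 5 s
print(simulate_avoidance([4.0, 22.0, 40.0]))  # timely responses avoid most shocks
```
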
Two-process theory of avoidance
This theory was originally proposed in order to explain discriminated avoidance learning, in which an
organism learns to avoid an aversive stimulus by escaping from a signal for that stimulus. Two
processes are involved: classical conditioning of the signal followed by operant conditioning of the
escape response:
a) Classical conditioning of fear. Initially the organism experiences the pairing of a CS with an
aversive US. The theory assumes that this pairing creates an association between the CS and the
US through classical conditioning and, because of the aversive nature of the US, the CS comes to
elicit a conditioned emotional reaction (CER) – "fear."
b) Reinforcement of the operant response by
fear-reduction. As a result of the first process, the CS now signals fear; this unpleasant emotional
reaction serves to motivate operant responses, and responses that terminate the CS are reinforced
by fear termination. Note that the theory does not say that the organism "avoids" the US in the sense
of anticipating it, but rather that the organism "escapes" an aversive internal state that is caused by
the CS. Several experimental findings seem to run counter to two-factor theory. For example,
avoidance behavior often extinguishes very slowly even when the initial CS-US pairing never occurs
again, so the fear response might be expected to extinguish (see Classical conditioning). Further,
animals that have learned to avoid often show little evidence of fear, suggesting that escape from
fear is not necessary to maintain avoidance behavior.[22]
Operant or "one-factor" theory[edit]
Some theorists suggest that avoidance behavior may simply be a special case of operant behavior
maintained by its consequences. In this view the idea of "consequences" is expanded to include
sensitivity to a pattern of events. Thus, in avoidance, the consequence of a response is a reduction
in the rate of aversive stimulation. Indeed, experimental evidence suggests that a "missed shock" is
detected as a stimulus, and can act as a reinforcer. Cognitive theories of avoidance take this idea a
step farther. For example, a rat comes to "expect" shock if it fails to press a lever and to "expect no
shock" if it presses it, and avoidance behavior is strengthened if these expectancies are confirmed.[22]
Operant hoarding
Operant hoarding refers to the observation that rats reinforced in a certain way may allow food
pellets to accumulate in a food tray instead of retrieving those pellets. In this procedure, retrieval of
the pellets always instituted a one-minute period of extinction during which no additional food pellets
were available but those that had been accumulated earlier could be consumed. This finding
appears to contradict the usual finding that rats behave impulsively in situations in which there is a
choice between a smaller food object right away and a larger food object after some delay.
See schedules of reinforcement.[23]

Neurobiological correlates
Further information: Reward system
The first scientific studies identifying neurons that responded in ways that suggested they encode for
conditioned stimuli came from work by Mahlon deLong[24][25] and by R.T. Richardson.[25] They showed
that nucleus basalis neurons, which release acetylcholine broadly throughout the cerebral cortex,
are activated shortly after a conditioned stimulus, or after a primary reward if no conditioned stimulus
exists. These neurons are equally active for positive and negative reinforcers, and have been shown
to be related to neuroplasticity in many cortical regions.[26] Evidence also exists that dopamine is
activated at similar times. There is considerable evidence that dopamine participates in both
reinforcement and aversive learning.[27] Dopamine pathways project much more densely onto frontal
cortex regions. Cholinergic projections, in contrast, are dense even in the posterior cortical regions
like the primary visual cortex. A study of patients with Parkinson's disease, a condition attributed to
the insufficient action of dopamine, further illustrates the role of dopamine in positive
reinforcement.[28] It showed that while off their medication, patients learned more readily with aversive
consequences than with positive reinforcement. Patients who were on their medication showed the
opposite to be the case, positive reinforcement proving to be the more effective form of learning
when dopamine activity is high.
A neurochemical process involving dopamine has been suggested to underlie reinforcement. When
an organism experiences a reinforcing stimulus, dopamine pathways in the brain are activated. This
network of pathways "releases a short pulse of dopamine onto many dendrites, thus broadcasting a
global reinforcement signal to postsynaptic neurons."[29] This allows recently activated synapses to
increase their sensitivity to efferent (conducting outward) signals, thus increasing the probability of
occurrence for the recent responses that preceded the reinforcement. These responses are,
statistically, the most likely to have been the behavior responsible for successfully achieving
reinforcement. But when the application of reinforcement is either less immediate or less contingent
(less consistent), the ability of dopamine to act upon the appropriate synapses is reduced.

Questions about the law of effect


A number of observations seem to show that operant behavior can be established without
reinforcement in the sense defined above. Most cited is the phenomenon of autoshaping (sometimes
called "sign tracking"), in which a stimulus is repeatedly followed by reinforcement, and in
consequence the animal begins to respond to the stimulus. For example, a response key is lighted
and then food is presented. When this is repeated a few times a pigeon subject begins to peck the
key even though food comes whether the bird pecks or not. Similarly, rats begin to handle small
objects, such as a lever, when food is presented nearby.[30][31] Strikingly, pigeons and rats persist in
this behavior even when pecking the key or pressing the lever leads to less food (omission
training).[32][33] Another apparent operant behavior that appears without reinforcement
is contrafreeloading.
These observations and others appear to contradict the law of effect, and they have prompted some
researchers to propose new conceptualizations of operant reinforcement (e.g.[34][35][36]) A more general
view is that autoshaping is an instance of classical conditioning; the autoshaping procedure has, in
fact, become one of the most common ways to measure classical conditioning. In this view, many
behaviors can be influenced by both classical contingencies (stimulus-response) and operant
contingencies (response-reinforcement), and the experimenter's task is to work out how these
interact.[37]
Drug dependence offers an easy example of the law of effect at work. Tolerance for a drug
goes up as one continues to use it after having a positive experience with a certain amount the
first time,[38] so it takes more and more of the drug to produce the same feeling. In an
experiment, the dose of the controlled substance would then have to be adjusted for the effect
to continue. The law of effect was built upon by the psychologist B. F. Skinner almost half a
century later in his principles of operant conditioning, "a learning process by which the
effect, or consequence, of a response influences the future rate of production of that response."[39]

Applications
Reinforcement and punishment are ubiquitous in human social interactions, and a great many
applications of operant principles have been suggested and implemented. The following are some
examples.
Addiction and dependence
Positive and negative reinforcement play central roles in the development and maintenance
of addiction and drug dependence. An addictive drug is intrinsically rewarding; that is, it functions
as a primary positive reinforcer of drug use. The brain's reward system assigns it incentive
salience (i.e., it is "wanted" or "desired"),[40][41][42] so as an addiction develops, deprivation of the drug
leads to craving. In addition, stimuli associated with drug use – e.g., the sight of a syringe, and the
location of use – become associated with the intense reinforcement induced by the
drug.[40][41][42] These previously neutral stimuli acquire several properties: their appearance can induce
craving, and they can become conditioned positive reinforcers of continued use.[40][41][42] Thus, if an
addicted individual encounters one of these drug cues, a craving for the associated drug may
reappear. For example, anti-drug agencies previously used posters with images of drug
paraphernalia as an attempt to show the dangers of drug use. However, such posters are no longer
used because of the effects of incentive salience in causing relapse upon sight of the stimuli
illustrated in the posters.
In drug dependent individuals, negative reinforcement occurs when a drug is self-administered in
order to alleviate or "escape" the symptoms of physical dependence (e.g., tremors and sweating)
and/or psychological dependence (e.g., anhedonia, restlessness, irritability, and anxiety) that arise
during the state of drug withdrawal.[40]
Animal training
Main article: Animal training
Animal trainers and pet owners were applying the principles and practices of operant conditioning
long before these ideas were named and studied, and animal training still provides one of the
clearest and most convincing examples of operant control. Of the concepts and procedures
described in this article, a few of the most salient are the following: (a) availability of primary
reinforcement (e.g. a bag of dog yummies); (b) the use of secondary reinforcement, (e.g. sounding a
clicker immediately after a desired response, then giving yummy); (c) contingency, assuring that
reinforcement (e.g. the clicker) follows the desired behavior and not something else; (d) shaping, as
in gradually getting a dog to jump higher and higher; (e) intermittent reinforcement, as in gradually
reducing the frequency of reinforcement to induce persistent behavior without satiation; (f) chaining,
where a complex behavior is gradually constructed from smaller units.[43]
An example of animal training from SeaWorld illustrates operant conditioning in practice.[44]
Animal training relies on both positive and negative reinforcement, and schedules of
reinforcement can play a large role in how quickly and reliably a behavior is established. A toy
simulation of the shaping procedure described in (d) above follows.
Applied behavior analysis

Applied behavior analysis is the discipline initiated by B. F. Skinner that applies the principles of
conditioning to the modification of socially significant human behavior. It uses the basic concepts of
conditioning theory, including conditioned stimulus (S^C), discriminative stimulus (S^d), response (R),
and reinforcing stimulus (S^rein or S^r for reinforcers, sometimes S^ave for aversive stimuli).[22] A
conditioned stimulus controls behaviors developed through respondent (classical) conditioning, such
as emotional reactions. The other three terms combine to form Skinner's "three-term contingency": a
discriminative stimulus sets the occasion for responses that lead to reinforcement. Researchers
have found the following protocol to be effective when they use the tools of operant conditioning to
modify human behavior (a minimal code sketch of the protocol follows the list):

1. State goal Clarify exactly what changes are to be brought about. For example, "reduce
weight by 30 pounds."
2. Monitor behavior Keep track of behavior so that one can see whether the desired effects
are occurring. For example, keep a chart of daily weights.
3. Reinforce desired behavior For example, congratulate the individual on weight losses. With
humans, a record of behavior may serve as a reinforcement. For example, when a
participant sees a pattern of weight loss, this may reinforce continuance in a behavioral
weight-loss program. However, individuals may perceive reinforcement which is intended to
be positive as negative and vice versa. For example, a record of weight loss may act as
negative reinforcement if it reminds the individual how heavy they actually are. The token
economy is an exchange system in which tokens are given as rewards for desired
behaviors. Tokens may later be exchanged for a desired prize or rewards such as power,
prestige, goods or services.
4. Reduce incentives to perform undesirable behavior For example, remove candy and
fatty snacks from kitchen shelves.
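The following sketch walks through steps 1–3 of the protocol above for the weight-loss example. It is a hypothetical illustration—the data, messages, and goal value are all invented—but it shows how a monitored record can itself deliver contingent, specific reinforcement.

```python
# Hypothetical sketch of the protocol above for a weight-loss program:
# step 1 states the goal, step 2 monitors behavior (the daily chart),
# step 3 reinforces progress contingently. All data are invented.

GOAL_LOSS = 30.0  # step 1: "reduce weight by 30 pounds"

def review(weights):
    """Step 2: the chart itself; step 3: contingent, specific praise."""
    start = weights[0]
    for day, w in enumerate(weights[1:], start=1):
        if w < weights[day - 1]:
            # praise is delivered only after the desired behavior (contingent)
            print(f"Day {day}: down to {w} lb - great job staying on plan!")
        else:
            # step 4 happens outside this chart: remove incentives to snack
            print(f"Day {day}: {w} lb - review what changed today.")
    print(f"Total progress: {start - weights[-1]:.1f} of {GOAL_LOSS} lb")

review([220, 219.5, 219.0, 219.4, 218.2])
```

As the text notes, the same record could act as negative reinforcement for some individuals, so the framing of the feedback matters as much as the data.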
Practitioners of applied behavior analysis (ABA) bring these procedures, and many variations and
developments of them, to bear on a variety of socially significant behaviors and issues. In many
cases, practitioners use operant techniques to develop constructive, socially acceptable behaviors to
replace aberrant behaviors. The techniques of ABA have been effectively applied to such things
as early intensive behavioral interventions for children with an autism spectrum
disorder (ASD),[45] research on the principles influencing criminal behavior, HIV
prevention,[46] conservation of natural resources,[47] education,[48] gerontology,[49] health and
exercise,[50] industrial safety,[51] language acquisition,[52] littering,[53] medical
procedures,[54] parenting,[55] psychotherapy, seatbelt use,[56] severe mental
disorders,[57] sports,[58] substance abuse, phobias, pediatric feeding disorders, and zoo management
and care of animals.[59] Some of these applications are among those described below.
Child behaviour – parent management training
Providing positive reinforcement for appropriate child behaviors is a major focus of parent
management training. Typically, parents learn to reward appropriate behavior through social rewards
(such as praise, smiles, and hugs) as well as concrete rewards (such as stickers or points towards a
larger reward as part of an incentive system created collaboratively with the child).[60] In addition,
parents learn to select simple behaviors as an initial focus and reward each of the small steps that
their child achieves towards reaching a larger goal (this concept is called "successive
approximations").[60][61]
Economics
Both psychologists and economists have become interested in applying operant concepts and
findings to the behavior of humans in the marketplace. An example is the analysis of consumer
demand, as indexed by the amount of a commodity that is purchased. In economics, the degree to
which price influences consumption is called "the price elasticity of demand." Certain commodities
are more elastic than others; for example, a change in price of certain foods may have a large effect
on the amount bought, while gasoline and other essentials may be less affected by price changes. In
terms of operant analysis, such effects may be interpreted in terms of motivations of consumers and
the relative value of the commodities as reinforcers.[62]
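For readers who want the standard formula behind "price elasticity of demand," the sketch below computes arc (midpoint) elasticity from two price-quantity observations. The numbers are hypothetical, chosen only to contrast an elastic good with an inelastic one; this is textbook economics, not a calculation from the cited study.

```python
def price_elasticity(q1, q2, p1, p2):
    """Arc (midpoint) price elasticity of demand: percentage change in
    quantity demanded divided by percentage change in price."""
    pct_q = (q2 - q1) / ((q1 + q2) / 2)
    pct_p = (p2 - p1) / ((p1 + p2) / 2)
    return pct_q / pct_p

# Hypothetical data: a snack food vs. gasoline after a 25% price rise
print(price_elasticity(100, 60, 2.00, 2.50))   # ~ -2.25: elastic demand
print(price_elasticity(100, 95, 2.00, 2.50))   # ~ -0.23: inelastic demand
```

A magnitude greater than 1 marks elastic demand (purchases fall sharply with price); a magnitude below 1 marks inelastic demand, matching the gasoline example in the text.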
Gambling – variable ratio scheduling
As stated earlier in this article, a variable ratio schedule yields reinforcement after the emission of an
unpredictable number of responses. This schedule typically generates rapid, persistent responding.
Slot machines pay off on a variable ratio schedule, and they produce just this sort of persistent lever-
pulling behavior in gamblers. The variable ratio payoff from slot machines and other forms of
gambling has often been cited as a factor underlying gambling addiction.[63]
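A variable ratio schedule is commonly approximated in simulation as a random-ratio process, in which every response pays off with a small fixed probability, so the number of responses between payoffs is unpredictable. The sketch below is a minimal model of that kind; the payoff probability and pull count are arbitrary assumptions, not parameters of any real slot machine.

```python
import random

# Minimal random-ratio approximation of a variable-ratio (VR) schedule:
# each "lever pull" pays off with probability p, so the gaps between
# payoffs vary unpredictably, as on a slot machine.

def simulate_vr(p=0.1, pulls=1000, seed=42):
    rng = random.Random(seed)
    gaps, since_last = [], 0
    for _ in range(pulls):
        since_last += 1
        if rng.random() < p:           # unpredictable reinforcement
            gaps.append(since_last)    # responses emitted since last payoff
            since_last = 0
    return gaps

gaps = simulate_vr()
print(f"{len(gaps)} payoffs over 1000 pulls; "
      f"gaps ranged from {min(gaps)} to {max(gaps)} pulls")
```

Because no run of unreinforced responses rules out a payoff on the very next pull, responding under this schedule tends to be rapid and highly resistant to extinction.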
Military psychology
Human beings have an innate resistance to killing and are reluctant to act in a direct, aggressive
way towards members of their own species, even to save life. This resistance to killing has caused
infantry to be remarkably inefficient throughout the history of military warfare.[64]
This phenomenon was not understood until S.L.A. Marshall (Brigadier General and military historian)
undertook interview studies of WWII infantry immediately following combat engagement. Marshall's
well-known and controversial book, Men Against Fire, revealed that only 15% of soldiers fired their
rifles with the purpose of killing in combat.[65] Following acceptance of Marshall's research by the US
Army in 1946, the Human Resources Research Office of the US Army began implementing new
training protocols which resemble operant conditioning methods. Subsequent applications of such
methods increased the percentage of soldiers able to kill to around 50% in Korea and over 90% in
Vietnam.[64] Revolutions in training included replacing traditional pop-up firing ranges with three-
dimensional, man-shaped, pop-up targets which collapsed when hit. This provided immediate
feedback and acted as positive reinforcement for a soldier's behavior.[66] Other improvements to
military training methods have included the timed firing course; more realistic training; high
repetitions; praise from superiors; marksmanship rewards; and group recognition. Negative
reinforcement includes peer accountability or the requirement to retake courses. Modern military
training conditions mid-brain response to combat pressure by closely simulating actual combat,
using mainly Pavlovian classical conditioning and Skinnerian operant conditioning (both forms
of behaviorism).[64]
Modern marksmanship training is such an excellent example of behaviorism that it has been used
for years in the introductory psychology course taught to all cadets at the US Military Academy at
West Point as a classic example of operant conditioning. In the 1980s, during a visit to West Point,
B.F. Skinner identified modern military marksmanship training as a near-perfect application of
operant conditioning.[66]
Lt. Col. Dave Grossman states the following about operant conditioning and US military training:
It is entirely possible that no one intentionally sat down to use operant conditioning or behavior
modification techniques to train soldiers in this area…But from the standpoint of a psychologist who
is also a historian and a career soldier, it has become increasingly obvious to me that this is exactly
what has been achieved.[64]
Nudge theory
Nudge theory (or nudge) is a concept in behavioural science, political theory and economics which
argues that indirect suggestions and non-forced compliance can influence the motives,
incentives and decision making of groups and individuals at least as effectively – if not more
effectively – than direct instruction, legislation, or enforcement.
Praise
The concept of praise as a means of behavioral reinforcement is rooted in B.F. Skinner's model of
operant conditioning. Through this lens, praise has been viewed as a means of positive
reinforcement, wherein an observed behavior is made more likely to occur by contingently praising
said behavior.[67] Hundreds of studies have demonstrated the effectiveness of praise in promoting
positive behaviors, notably in studies of teacher and parent use of praise on children in promoting
improved behavior and academic performance,[68][69] but also in the study of work
performance.[70] Praise has also been demonstrated to reinforce positive behaviors in non-praised
adjacent individuals (such as a classmate of the praise recipient) through vicarious
reinforcement.[71] Praise may be more or less effective in changing behavior depending on its form,
content and delivery. In order for praise to effect positive behavior change, it must be contingent on
the positive behavior (i.e., only administered after the targeted behavior is enacted), must specify the
particulars of the behavior that is to be reinforced, and must be delivered sincerely and credibly.[72]
Acknowledging the effect of praise as a positive reinforcement strategy, numerous behavioral and
cognitive behavioral interventions have incorporated the use of praise in their protocols.[73][74] The
strategic use of praise is recognized as an evidence-based practice in both classroom
management[73] and parenting training interventions,[69] though praise is often subsumed in
intervention research into a larger category of positive reinforcement, which includes strategies such
as strategic attention and behavioral rewards.
Several studies have examined the effect that cognitive-behavioral therapy and operant-behavioral
therapy have on different medical conditions. When patients developed cognitive and behavioral
techniques that changed their behaviors, attitudes, and emotions, their pain severity decreased. The
results of these studies showed an influence of cognitions on pain perception, and this impact helps
explain the general efficacy of Cognitive-Behavioral Therapy (CBT) and Operant-Behavioral
Therapy (OBT).
Psychological manipulation
Braiker identified the following ways that manipulators control their victims:[75]

 Positive reinforcement: includes praise, superficial charm, superficial sympathy (crocodile tears),
excessive apologizing, money, approval, gifts, attention, facial expressions such as a forced
laugh or smile, and public recognition.
 Negative reinforcement: may involve removing one from a negative situation
 Intermittent or partial reinforcement: Partial or intermittent negative reinforcement can create an
effective climate of fear and doubt. Partial or intermittent positive reinforcement can encourage
the victim to persist – for example in most forms of gambling, the gambler is likely to win now
and again but still lose money overall.
 Punishment: includes nagging, yelling, the silent treatment, intimidation,
threats, swearing, emotional blackmail, the guilt trip, sulking, crying, and playing the victim.
 Traumatic one-trial learning: using verbal abuse, explosive anger, or other intimidating behavior
to establish dominance or superiority; even one incident of such behavior can condition or train
victims to avoid upsetting, confronting or contradicting the manipulator.
Traumatic bonding

Traumatic bonding occurs as the result of ongoing cycles of abuse in which the intermittent
reinforcement of reward and punishment creates powerful emotional bonds that are resistant to
change.[76][77]
Another source describes the conditions as follows:[78] 'The necessary conditions for traumatic bonding are that one
person must dominate the other and that the level of abuse chronically spikes and then subsides.
The relationship is characterized by periods of permissive, compassionate, and even affectionate
behavior from the dominant person, punctuated by intermittent episodes of intense abuse. To
maintain the upper hand, the victimizer manipulates the behavior of the victim and limits the victim's
options so as to perpetuate the power imbalance. Any threat to the balance of dominance and
submission may be met with an escalating cycle of punishment ranging from seething intimidation to
intensely violent outbursts. The victimizer also isolates the victim from other sources of support,
which reduces the likelihood of detection and intervention, impairs the victim's ability to receive
countervailing self-referent feedback, and strengthens the sense of unilateral dependency...The
traumatic effects of these abusive relationships may include the impairment of the victim's capacity
for accurate self-appraisal, leading to a sense of personal inadequacy and a subordinate sense of
dependence upon the dominating person. Victims also may encounter a variety of unpleasant social
and legal consequences of their emotional and behavioral affiliation with someone who perpetrated
aggressive acts, even if they themselves were the recipients of the aggression.'
Video games[edit]
Main article: Compulsion loop
Many video games are designed around a compulsion loop, adding a type of
positive reinforcement through a variable rate schedule to keep the player playing. This can lead to
the pathology of video game addiction.[79]
As part of a trend in the monetization of video games during the 2010s, some games offered loot
boxes as rewards or as items purchasable with real-world funds. Each box contains a random
selection of in-game items. The practice has been tied to the same methods by which slot machines
and other gambling devices dole out rewards, as it follows a variable rate schedule. While the
general perception is that loot boxes are a form of gambling, the practice is classified as such in only
a few countries. However, methods to use those items as virtual currency for online gambling, or to
trade them for real-world money, have created a skin gambling market that is under legal evaluation.[80]
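To see why loot boxes behave like the variable-rate payoffs described above, consider the sketch below of a weighted drop table. The item names and odds are invented for illustration; the point is only that the valuable outcome arrives unpredictably, just as on a slot machine.

```python
import random

# Invented drop table: the "legendary" outcome is rare and unpredictable,
# which is what links the mechanic to variable-ratio reinforcement.
DROP_TABLE = [("common skin", 0.80), ("rare skin", 0.15), ("legendary skin", 0.05)]

def open_box(rng):
    roll, cumulative = rng.random(), 0.0
    for item, weight in DROP_TABLE:
        cumulative += weight
        if roll < cumulative:
            return item
    return DROP_TABLE[-1][0]   # guard against floating-point rounding

rng = random.Random(7)
print([open_box(rng) for _ in range(5)])
```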
Workplace culture of fear
Ashforth discussed potentially destructive sides of leadership and identified what he referred to
as petty tyrants: leaders who exercise a tyrannical style of management, resulting in a climate of fear
in the workplace.[81] Partial or intermittent negative reinforcement can create an effective climate of
fear and doubt.[75] When employees get the sense that bullies are tolerated, a climate of fear may be
the result.[82]
Individual differences in sensitivity to reward, punishment, and motivation have been studied under
the premises of reinforcement sensitivity theory and have also been applied to workplace
performance.
One of the many reasons proposed for the dramatic costs associated with healthcare is the practice
of defensive medicine. Prabhu reviews the article by Cole and discusses how the responses of two
groups of neurosurgeons are classic operant behavior. One group practiced in a state with
restrictions on medical lawsuits and the other group in a state with no restrictions. The
neurosurgeons were queried anonymously on their practice patterns. The physicians in the state
with no restrictions on medical lawsuits changed their practice in response to negative feedback
(fear of lawsuits).[83]

Social Learning Theory (Albert Bandura)
The social learning theory of Bandura emphasizes the importance of observing
and modeling the behaviors, attitudes, and emotional reactions of others.
Bandura (1977) states: “Learning would be exceedingly laborious, not to mention
hazardous, if people had to rely solely on the effects of their own actions to
inform them what to do. Fortunately, most human behavior is learned
observationally through modeling: from observing others one forms an idea of
how new behaviors are performed, and on later occasions this coded information
serves as a guide for action.” (p. 22). Social learning theory explains human
behavior in terms of continuous reciprocal interaction between cognitive,
behavioral, and environmental influences. The component processes underlying
observational learning are: (1) Attention, including modeled events
(distinctiveness, affective valence, complexity, prevalence, functional value) and
observer characteristics (sensory capacities, arousal level, perceptual set, past
reinforcement), (2) Retention, including symbolic coding, cognitive organization,
symbolic rehearsal, and motor rehearsal, (3) Motor Reproduction, including physical
capabilities, self-observation of reproduction, accuracy of feedback, and (4)
Motivation, including external, vicarious and self reinforcement.

Because it encompasses attention, memory and motivation, social learning
theory spans both cognitive and behavioral frameworks. Bandura’s theory
improves upon the strictly behavioral interpretation of modeling provided by Miller
& Dollard (1941). Bandura’s work is related to the theories
of Vygotsky and Lave which also emphasize the central role of social learning.
Application
Social learning theory has been applied extensively to the understanding of
aggression (Bandura, 1973) and psychological disorders, particularly in the
context of behavior modification (Bandura, 1969). It is also the theoretical
foundation for the technique of behavior modeling which is widely used in training
programs. In recent years, Bandura has focused his work on the concept of self-
efficacy in a variety of contexts (e.g., Bandura, 1997).

Example
The most common (and pervasive) examples of social learning situations are
television commercials. Commercials suggest that drinking a certain beverage or
using a particular hair shampoo will make us popular and win the admiration of
attractive people. Depending upon the component processes involved (such as
attention or motivation), we may model the behavior shown in the commercial
and buy the product being advertised.

Principles
1. The highest level of observational learning is achieved by first organizing
and rehearsing the modeled behavior symbolically and then enacting it
overtly. Coding modeled behavior into words, labels or images results in
better retention than simply observing.
2. Individuals are more likely to adopt a modeled behavior if it results in
outcomes they value.
3. Individuals are more likely to adopt a modeled behavior if the model is
similar to the observer, has admired status, and the behavior has
functional value.
INSIGHT LEARNING (WOLFGANG KOHLER – 1925)
Another contribution that provides evidence of cognition in learning is the fascinating study
reported by Kohler (1951) in his book entitled, Mentality of Apes. The study was conducted by
Kohler off the coast of Africa at the anthropoid station maintained by the Prussian Academy of
Science in Tenerife during the years 1913 to 1917. The majority of observations were made in
the first six months of 1914 (p. 7). Kohler’s report on these experiments was published in 1917
in Intelligenzprüfungen an Anthropoiden. The English version, The Mentality of Apes, was
published in 1925.

Anthropoids were selected as the subjects of Kohler’s experiments both because of their
similarity to man in intelligence and behavior and—more importantly—because of their
subordinate state of intelligence, which makes it possible to observe in the act of learning
that which is not possible when observing the human adult:

Even assuming that the anthropoid ape behaves intelligently in the sense in which the word is
applied to man, there is yet from the very start no doubt that he remains in this respect far
behind man, becoming perplexed and making mistakes in relatively simple situations; but it is
precisely for this reason that we may, under the simplest conditions, gain knowledge of the
nature of intelligent acts. The human adult seldom performs for the first time in his life tasks
involving intelligence of so simple a nature that they can be easily investigated; and when in
more complicated tasks adult men really find a solution, they can only with difficulty observe
their own procedure. (Kohler, 1951, pp. 1-2)

Kohler operationally defined intelligence as the utilization of roundabout methods—“detours,
roundabout ways, paths or routes, circuitous routes and indirect ways” (p. 11)—to overcome
obstacles:

As experience shows, we do not speak of behaviour as being intelligent, when human beings or
animals attain their objective by a direct unquestionable route which clearly arises naturally out
of their organization. But we tend to speak of “intelligence” when, circumstances having
blocked the obvious course, the human being or animal takes a roundabout path, so meeting
the situation. (Kohler, 1951, pp. 3-4)

All of his experiments were set up in this way, the direct path to the objective—usually a
banana—being blocked, but a roundabout way being left open. Kohler was careful to set his
experiments so as to require something beyond the roundabout way a chimpanzee might take
in its normal behavior:

No one expects a chimpanzee to remain helpless before a horizontal opening in a wall, on the
other side of which his objective lies, and so it makes no impression at all on us when he makes
as horizontal a shape as he can of himself, and thus slips through. It is only when roundabout
methods are tried on the lower animals, and when you see even chimpanzees undecided, nay,
perplexed to the point of helplessness, by a seemingly minor modification of the problem—it is
only then you realize that circuitous methods cannot in general be considered usual and
matter-of-course conduct. (Kohler, 1951, p. 13)

He also set the experiments in such a way as to be able to distinguish between chance behavior
that brings the subject in contact with the objective, and genuine achievement:

As chance can bring the animals into more favourable spots, it will also occasionally happen
that a series of pure coincidences will lead them from their starting-point right up to the
objective, or at least to points from which a straight path leads to the objective. This holds in all
intelligence tests (at least in principle: for the more complex the problem to be solved, the less
likelihood is there that it will be solved wholly by chance); and, therefore, we have not only to
answer the question whether an animal in an experiment will find the roundabout way (in the
wider meaning of the word) at all, we have to add the limiting condition, that results of chance
shall be excluded. (Kohler, 1951, p. 16)

The genuine achievement takes place as a single continuous occurrence, a unity, as it were, in
space as well as in time; in our example as one continuous run, without a second’s stop, right
up to the objective. A successful chance solution consists of an agglomeration of separate
movements, which start, finish, start again, remain independent of one another in direction and
speed, and only in a geometrical summation start at the starting-point, and finish at the
objective. The experiments on hens illustrate the contrast in a particularly striking way, when
the animal, under pressure of the desire to reach the objective, first flies about uncertainly (in
zigzag movements which are shown in Fig. 4a but in not nearly great enough confusion), and
then, if one of these zigzags leads to a favourable place, suddenly rushes along the curve in
one single unbroken run. (Kohler, 1951, pp. 16-17)

To separate problem solving behavior from normal or chance behavior, Kohler designed series
of experiments that required the use of implements such as strings, sticks and boxes in order
to obtain the objective. In these experiments, the banana could not be reached by making a
detour, or by the body of the animal being adapted to the shape of its surroundings, but
instead required the chimpanzee to make use of available objects as intermediaries. For
example, in one series of experiments food was placed outside of the animal’s reach, but a
string was fastened to it, the end of which was placed within reach. In this simple case, none of
the animals ever hesitated to pull the string to draw the food to them (Kohler, 1951, p. 26). In a
more complicated variation, multiple strings were used, sometimes crossing each other, with
only one of the strings attached. In these experiments no conclusion could be drawn as to
whether or not the chimpanzee actually recognized the “right” string. Consistently the
animal would take a position behind the bars of the cage as close as possible to the objective,
and begin pulling in rapid succession, starting with the closest string, until the food was
obtained. In another variation, where only one string was used, but was not attached to the
food, only placed in a position closer or farther from it, it was found that the animal would
always pull the string if it visibly touched the objective. If the distance between the objective
and the end of the string was wide, the chimpanzee would generally not pull the string, unless
he was interested in the string itself, or wanted to use it in some other way (p. 30).

In yet another series of experiments, the objective was not connected in any way with the
animals’ room but was only obtainable by means of pulling it in with a stick. Kohler’s
description of one of his subjects, Tschego, is representative of the pattern he observed with
other chimpanzees:

Tschego first tries to reach the fruit with her hand; of course, in vain. She then moves back and
lies down; then she makes another attempt, only to give it up again. This goes on for more than
half-an-hour. Finally she lies down for good, and takes no further interest in the objective. The
sticks might be non-existent as far as she is concerned, although they can hardly escape her
attention as they are in her immediate neighbourhood. But now the younger animals, who are
disporting themselves outside in the stockade, begin to take notice, and approach the objective
gradually. Suddenly Tschego leaps to her feet, seizes a stick, and quite adroitly, pulls the
bananas till they are within reach. In this maneuver, she immediately places the stick on the
farther side of the bananas. She uses first the left arm, then the right, and frequently changes
from one to the other. She does not always hold the stick as a human being would, but
sometimes clutches it as she does her food, between the third and fourth fingers, while the
thumb is pressed against it, from the other side. (Kohler, 1951, pp. 31-32)

Another of Kohler’s examples clearly demonstrated how knowledge of the lay of the land known
beforehand might be used to plan an indirect circuit through it:[1]

One room of the monkey-house has a very high window, with wooden shutters, that looks out
on the playground. The playground is reached from the room by a door, which leads into the
corridor, a short part of this corridor, and a door opening on to the playground. All the parts
mentioned are well known to the chimpanzees, but animals in that room can see only the
interior. I take Sultan with me from another room of the monkey-house, where he was playing
with the others, lead him across the corridor into that room, lean the door to behind us, go with
him to the window, open the wooden shutter a little, throw a banana out, so that Sultan can see
it disappear through the window, but, on account of its height, does not see it fall, and then
quickly close the shutter again (Sultan can only have seen a little of the wire-roof outside).
When I turn round Sultan is already on the way, pushes the door open, vanishes down the
corridor, and is then to be heard at the second door, and immediately after in front of the
window. I find him outside, eagerly searching underneath the window; the banana has happened
to fall into the dark crack between two boxes. Thus not to be able to see the place where the
objective is, and the greater part of the possible indirect way to it, does not seem to hinder a
solution; if the lay of the land be known beforehand, the indirect circuit through it can be
apprehended with ease. (Kohler, 1951, pp. 20-21)
Kohler also found that an increase in motivation could be used to help the tiring chimpanzee
persist and succeed:

The improvement of the objective by the addition of further items is a method which can be
employed over and over again with success when the animal is obviously quite near to a
solution, but, in the case of a lengthy experiment, there is the risk that fatigue will intervene
and spoil the result. (pp. 42-43)

Some of the most well-known experiments in the study were those involving boxes, which the
animals must use in order to obtain access to an objective fastened high above the ground and
unobtainable by any circuitous routes. In setting up these experiments Kohler noted that “the
possibility of utilizing old methods generally inhibits the development of new ones” (p. 39) and
directed that all sticks should be removed before experiments with boxes were conducted. One
such experiment is described as follows:

The six young animals of the station colony were enclosed in a room with perfectly smooth
walls, whose roof—about two metres in height—they could not reach. A wooden box
(dimensions fifty centimetres by forty by thirty), open on one side, was standing about in the
middle of the room, the one open side vertical, and in plain sight. The objective was nailed to
the roof in a corner, about two and a half metres distant from the box. All six apes vainly
endeavored to reach the fruit by leaping up from the ground. Sultan soon relinquished this
attempt, paced restlessly up and down, suddenly stood still in front of the box, seized it, tipped
it hastily straight towards the objective, but began to climb upon it at a (horizontal) distance of
half a metre, and springing upwards with all his force, tore down the banana. About five
minutes had elapsed since the fastening of the fruit; from the momentary pause before the box
to the first bite into the banana, only a few seconds elapsed, a perfectly continuous action after
the first hesitation. Up to that instant none of the animals had taken any notice of the box; they
were all far too intent on the objective; none of the other five took any part in carrying the box;
Sultan performed this feat single-handed in a few seconds. (pp. 39-40)

Not all of the apes employed the boxes so quickly. For example, Koko took several weeks to
learn to use the box (Kohler, 1951, pp. 39-45). However, once he figured it out, and
successfully obtained the banana several times using the box as a platform, he would “turn
towards the box and seize it as soon as anyone came in sight carrying edibles” (p. 45).

Kohler’s (1951) experiments also included situations in which the objective was obtained
through use of a ladder or box brought in by the ape from outside the room in which the
objective had been hung (p. 51); situations in which the apes positioned and climbed swinging
doors to reach the objective (pp. 53-57); and even situations in which the apes used other
apes, their keeper, or the observer as a means to reach the objective (p. 48). There were also
experiments in which the apes could only reach the fruit by moving a large box (pp. 59-66), or
by detouring from their purpose to obtain a stick, a piece of wire, a stone, or a rope that can be
used as a tool (pp. 101-119). In some situations the apes had to remove stones from boxes to
make them light enough to move (pp. 119-120) or connect two short sticks together to make
one stick long enough to reach the banana (p. 125). Problems were also set in which the apes
must stack multiple boxes on top of each other (pp. 135-154), and combine this with the use
of a reaching stick in order to get the fruit.

The purpose behind all of Kohler’s experiments was to determine whether or not apes “behave
with intelligence and insight” and “to ascertain the degree of relationship between anthropoid
apes and man” (1951, p. 1). His conclusion was that chimpanzees do, in fact, “manifest
intelligent behavior of the general kind familiar in human beings," so long as the experimental
tests are carefully designed to include only those limits of difficulty and functions within which
"the chimpanzee can possibly show insight," and cautioned that "in general, the experimenter
should recognize that every intelligence test is a test, not only of the creature examined, but
also of the experimenter himself” (p. 265).

Gestalt psychology or gestaltism (from German: Gestalt [ɡəˈʃtalt] "shape, form")[1][2] is a philosophy of
mind of the Berlin School of experimental psychology. Gestalt psychology is an attempt to
understand the laws behind the ability to acquire and maintain meaningful perceptions in an
apparently chaotic world. The central principle of gestalt psychology is that the mind forms a global
whole with self-organizing tendencies through the law of prägnanz.
This principle maintains that when the human mind (perceptual system) forms a percept or "gestalt",
the whole has a reality of its own, independent of the parts. The original famous phrase of Gestalt
psychologist Kurt Koffka, "the whole is something else than the sum of its parts"[3] is often incorrectly
translated[4] as "The whole is greater than the sum of its parts", and thus used when explaining
gestalt theory, and further incorrectly applied to systems theory.[5] Koffka did not like the translation.
"No, what we mean is that the whole is different from the sum of his parts," he said.[6] The whole has
an independent existence.
In the study of perception, Gestalt psychologists claim that perceptions are the products of complex
interactions among various stimuli. Contrary to the behaviorist approach to focusing on stimulus and
response, Gestalt psychologists sought to understand the organization of cognitive processes
(Carlson and Heth, 2010). Our brain is capable of generating whole forms, particularly with respect
to the visual recognition of global figures instead of just collections of simpler and unrelated
elements (points, lines, curves, etc.).
In psychology, gestaltism is often opposed to structuralism. Gestalt theory, it is proposed, allows for
the deconstruction of the whole situation into its elements.[7]


Origins
The concept of gestalt was first introduced in philosophy and psychology in 1890 by Christian von
Ehrenfels (a member of the School of Brentano). The idea of gestalt has its roots in theories
by David Hume, Johann Wolfgang von Goethe, Immanuel Kant, David Hartley, and Ernst Mach. Max
Wertheimer's unique contribution was to insist that the "gestalt" is perceptually primary, defining the
parts it was composed from, rather than being a secondary quality that emerges from those parts, as
von Ehrenfels's earlier Gestalt-Qualität had been.
Both von Ehrenfels and Edmund Husserl seem to have been inspired by Mach's work Beiträge zur
Analyse der Empfindungen (Contributions to the Analysis of Sensations, 1886), in formulating their
very similar concepts of gestalt and figural moment, respectively. On the philosophical foundations of
these ideas see Foundations of Gestalt Theory (Smith, ed., 1988).
Early 20th century theorists, such as Kurt Koffka, Max Wertheimer, and Wolfgang Köhler (students
of Carl Stumpf) saw objects as perceived within an environment according to all of their elements
taken together as a global construct. This 'gestalt' or 'whole form' approach sought to define
principles of perception—seemingly innate mental laws that determined the way objects were
perceived. It is based on the here and now, and in the way things are seen. Images can be divided
into figure or ground. The question is what is perceived at first glance: the figure in front, or the
background.
These laws took several forms, such as the grouping of similar, or proximate, objects together, within
this global process. Although gestalt has been criticized for being merely descriptive,[8] it has formed
the basis of much further research into the perception of patterns and objects (Carlson et al. 2000),
and of research into behavior, thinking, problem solving and psychopathology.

Gestalt therapy
The founders of Gestalt therapy, Fritz and Laura Perls, had worked with Kurt Goldstein, a
neurologist who had applied principles of Gestalt psychology to the functioning of the organism.
Laura Perls had been a Gestalt psychologist before she became a psychoanalyst and before she
began developing Gestalt therapy together with Fritz Perls.[9] The extent to which Gestalt psychology
influenced Gestalt therapy is disputed, however. In any case it is not identical with Gestalt
psychology. On the one hand, Laura Perls preferred not to use the term "Gestalt" to name the
emerging new therapy, because she thought that the gestalt psychologists would object to it;[10] on
the other hand Fritz and Laura Perls clearly adopted some of Goldstein's work.[11] Thus, though
recognizing the historical connection and the influence, most gestalt psychologists emphasize that
gestalt therapy is not a form of gestalt psychology.
Mary Henle noted in her presidential address to Division 24 at the meeting of the American
Psychological Association (1975): "What Perls has done has been to take a few terms from Gestalt
psychology, stretch their meaning beyond recognition, mix them with notions—often unclear and
often incompatible—from the depth psychologies, existentialism, and common sense, and he has
called the whole mixture gestalt therapy. His work has no substantive relation to scientific Gestalt
psychology. To use his own language, Fritz Perls has done 'his thing'; whatever it is, it is not Gestalt
psychology"[12] With her analysis however, she restricts herself explicitly to only three of Perls' books
from 1969 and 1972, leaving out Perls' earlier work, and Gestalt therapy in general as a
psychotherapy method.[13]
There have been clinical applications of Gestalt psychology in the psychotherapeutic field long
before Perls'ian Gestalt therapy, in group psychoanalysis (Foulkes), Adlerian individual psychology,
by Gestalt psychologists in psychotherapy like Erwin Levy, Abraham S. Luchins, by Gestalt
psychologically oriented psychoanalysts in Italy (Canestrari and others), and there have been newer
developments foremost in Europe, e.g. Gestalt theoretical psychotherapy.

Theoretical framework and methodology


The school of gestalt practiced a series of theoretical and methodological principles that attempted
to redefine the approach to psychological research. This is in contrast to investigations developed at
the beginning of the 20th century, based on traditional scientific methodology, which divided the
object of study into a set of elements that could be analyzed separately with the objective of
reducing the complexity of this object.
The theoretical principles are the following:

 Principle of Totality—The conscious experience must be considered globally (by taking into
account all the physical and mental aspects of the individual simultaneously) because the nature
of the mind demands that each component be considered as part of a system of dynamic
relationships.
 Principle of psychophysical isomorphism – A correlation exists between conscious
experience and cerebral activity.
Based on the principles above the following methodological principles are defined:

 Phenomenon experimental analysis—In relation to the Totality Principle any psychological
research should take phenomena as a starting point and not be solely focused on sensory
qualities.
 Biotic experiment—The school of gestalt established a need to conduct real experiments that
sharply contrasted with and opposed classic laboratory experiments. This signified
experimenting in natural situations, developed in real conditions, in which it would be possible to
reproduce, with higher fidelity, what would be habitual for a subject.[14]

Support from cybernetics and neurology


In the 1940s and 1950s, laboratory research in neurology and what became known
as cybernetics on the mechanism of frogs' eyes indicated that perception of 'gestalts' (in particular
gestalts in motion) is perhaps more primitive and fundamental than 'seeing' as such:
A frog hunts on land by vision... He has no fovea, or region of greatest acuity in vision, upon
which he must center a part of the image... The frog does not seem to see or, at any rate, is
not concerned with the detail of stationary parts of the world around him. He will starve to
death surrounded by food if it is not moving. His choice of food is determined only by size
and movement. He will leap to capture any object the size of an insect or worm, providing it
moves like one. He can be fooled easily not only by a piece of dangled meat but by any
moving small object... He does remember a moving thing provided it stays within his field of
vision and he is not distracted.[15]
The lowest-level concepts related to visual perception for a human being probably differ little
from the concepts of a frog. In any case, the structure of the retina in mammals and
in human beings is the same as in amphibians. The phenomenon of distortion
of perception of an image stabilized on the retina gives some idea of the concepts of the
subsequent levels of the hierarchy. This is a very interesting phenomenon. When a person
looks at an immobile object, "fixes" it with his eyes, the eyeballs do not remain absolutely
immobile; they make small involuntary movements. As a result the image of the object on the
retina is constantly in motion, slowly drifting and jumping back to the point of maximum
sensitivity. The image "marks time" in the vicinity of this point.[16]

Properties
The key principles of gestalt systems are emergence,
reification, multistability and invariance.[17]

Reification
[Figure: Reification, examples A–D]

Reification is the constructive or generative aspect of perception, by which the experienced
percept contains more explicit spatial information than the sensory stimulus on which it is
based.
For instance, a triangle is perceived in picture A, though no triangle is there. In
pictures B and D the eye recognizes disparate shapes as "belonging" to a single shape,
in C a complete three-dimensional shape is seen, where in actuality no such thing is drawn.
Reification can be explained by progress in the study of illusory contours, which are treated
by the visual system as "real" contours.
Multistability
[Figure: the Necker cube and the Rubin vase, two examples of multistability]

Multistability (or multistable perception) is the tendency of ambiguous perceptual
experiences to pop back and forth unstably between two or more alternative interpretations.
This is seen, for example, in the Necker cube and Rubin's Figure/Vase illusion shown here.
Other examples include the three-legged blivet and artist M. C. Escher's artwork and the
appearance of flashing marquee lights moving first one direction and then suddenly the
other. Again, gestalt does not explain how images appear multistable, only that they do.

Invariance
[Figure: Invariance, examples A–D]

Invariance is the property of perception whereby simple geometrical objects are recognized
independent of rotation, translation, and scale; as well as several other variations such as
elastic deformations, different lighting, and different component features. For example, the
objects in A in the figure are all immediately recognized as the same basic shape, and are
immediately distinguishable from the forms in B. They are even recognized despite
perspective and elastic deformations as in C, and when depicted using different graphic
elements as in D. Computational theories of vision, such as those by David Marr, have
provided alternate explanations of how perceived objects are classified.
Emergence, reification, multistability, and invariance are not necessarily separable modules
to model individually, but they could be different aspects of a single
unified dynamic mechanism.[18]
Prägnanz
The fundamental principle of gestalt perception is the law of Prägnanz [de] (in the German
language, pithiness), which says that we tend to order our experience in a manner that is
regular, orderly, symmetrical, and simple. Gestalt psychologists attempt to discover
refinements of the law of Prägnanz, and this involves writing down laws that, hypothetically,
allow us to predict the interpretation of sensation, what are often called "gestalt laws".[19]

[Figures: Law of Proximity, Law of Similarity, Law of Closure, Law of Symmetry]
A major aspect of Gestalt psychology is that it implies that the mind understands external
stimuli as wholes rather than as the sums of their parts. The wholes are structured and organized
using grouping laws. The various laws are called laws or principles, depending on the paper
where they appear—but for simplicity's sake, this article uses the term laws. These laws
deal with the sensory modality of vision. However, there are analogous laws for other
sensory modalities including auditory, tactile, gustatory and olfactory (Bregman – GP). The
visual Gestalt principles of grouping were introduced in Wertheimer (1923). Through the
1930s and '40s Wertheimer, Kohler and Koffka formulated many of the laws of grouping
through the study of visual perception.

1. Law of Proximity—The law of proximity states that when an individual perceives an
assortment of objects, they perceive objects that are close to each other as forming
a group. For example, in the figure that illustrates the Law of proximity, there are 72
circles, but we perceive the collection of circles in groups. Specifically, we perceive
that there is a group of 36 circles on the left side of the image, and three groups of
12 circles on the right side of the image. This law is often used in advertising logos
to emphasize which aspects of events are associated.[20][21] (A small computational
sketch of this grouping rule appears after this list.)
2. Law of Similarity—The law of similarity states that elements within an assortment
of objects are perceptually grouped together if they are similar to each other. This
similarity can occur in the form of shape, colour, shading or other qualities. For
example, the figure illustrating the law of similarity portrays 36 circles all equal
distance apart from one another forming a square. In this depiction, 18 of the circles
are shaded dark, and 18 of the circles are shaded light. We perceive the dark circles
as grouped together and the light circles as grouped together, forming six horizontal
lines within the square of circles. This perception of lines is due to the law of
similarity.[21]
3. Law of Closure—The law of closure states that individuals perceive objects such
as shapes, letters, pictures, etc., as being whole when they are not complete.
Specifically, when parts of a whole picture are missing, our perception fills in the
visual gap. Research shows that the reason the mind completes a regular figure
that is not perceived through sensation is to increase the regularity of surrounding
stimuli. For example, the figure that depicts the law of closure portrays what we
perceive as a circle on the left side of the image and a rectangle on the right side of
the image. However, gaps are present in the shapes. If the law of closure did not
exist, the image would depict an assortment of different lines with different lengths,
rotations, and curvatures—but with the law of closure, we perceptually combine the
lines into whole shapes.[20][21][22]
4. Law of Symmetry—The law of symmetry states that the mind perceives objects as
being symmetrical and forming around a center point. It is perceptually pleasing to
divide objects into an even number of symmetrical parts. Therefore, when two
symmetrical elements are unconnected the mind perceptually connects them to
form a coherent shape. Similarities between symmetrical objects increase the
likelihood that objects are grouped to form a combined symmetrical object. For
example, the figure depicting the law of symmetry shows a configuration of square
and curled brackets. When the image is perceived, we tend to observe three pairs
of symmetrical brackets rather than six individual brackets.[20][21]
5. Law of Common Fate—The law of common fate states that objects are perceived
as lines that move along the smoothest path. Experiments using the visual sensory
modality found that movement of elements of an object produce paths that
individuals perceive that the objects are on. We perceive elements of objects to
have trends of motion, which indicate the path that the object is on. The law of
continuity implies the grouping together of objects that have the same trend of
motion and are therefore on the same path. For example, if there are an array of
dots and half the dots are moving upward while the other half are moving
downward, we would perceive the upward moving dots and the downward moving
dots as two distinct units.[23]

[Figure: Law of Good Continuation]
6. Law of Continuity—The law of continuity states that elements of objects tend to be
grouped together, and therefore integrated into perceptual wholes, if they are aligned
within an object. In cases where there is an intersection between objects, individuals
tend to perceive the two objects as two single uninterrupted entities. Stimuli remain
distinct even with overlap. We are less likely to group elements with sharp abrupt
directional changes as being one object.[20]

7. Law of Good Gestalt—The law of good gestalt explains that elements of objects
tend to be perceptually grouped together if they form a pattern that is regular,
simple, and orderly. This law implies that as individuals perceive the world, they
eliminate complexity and unfamiliarity so they can observe a reality in its most
simplistic form. Eliminating extraneous stimuli helps the mind create meaning. This
meaning created by perception implies a global regularity, which is often mentally
prioritized over spatial relations. The law of good gestalt focuses on the idea of
conciseness, which is what all of gestalt theory is based on. This law has also been
called the law of Prägnanz.[20] Prägnanz is a German word that directly translates to
mean "pithiness" and implies the ideas of salience, conciseness and orderliness.[23]
8. Law of Past Experience—The law of past experience implies that under some
circumstances visual stimuli are categorized according to past experience. If two
objects tend to be observed within close proximity, or small temporal intervals, the
objects are more likely to be perceived together. For example, the English language
contains 26 letters that are grouped to form words using a set of rules. If an
individual reads an English word they have never seen, they use the law of past
experience to interpret the letters "L" and "I" as two letters beside each other, rather
than using the law of closure to combine the letters and interpret the object as an
uppercase U.[23]
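The grouping described in the law of proximity can be mimicked computationally, though such a model is only an illustration and makes no claim about how the visual system actually works. The sketch below links any two points closer than an arbitrary threshold and treats each connected set as one perceived group; the coordinates and threshold are invented.

```python
import math

# Illustrative (not physiological) model of grouping by proximity:
# points closer than a threshold are linked, and connected points
# form a single perceived group.

def group_by_proximity(points, threshold=1.5):
    groups, unassigned = [], set(range(len(points)))
    while unassigned:
        stack, group = [unassigned.pop()], []
        while stack:
            i = stack.pop()
            group.append(i)
            near = {j for j in unassigned
                    if math.dist(points[i], points[j]) < threshold}
            unassigned -= near
            stack.extend(near)
        groups.append([points[i] for i in group])
    return groups

# Two clusters separated by a gap wider than the threshold
pts = [(0, 0), (1, 0), (0, 1), (5, 0), (6, 0), (5, 1)]
print(len(group_by_proximity(pts)))  # 2 perceived groups
```

Run on the sample points, the six dots fall into two groups, mirroring how the 72 circles in the proximity figure are seen as a few clusters rather than 72 separate elements.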

Criticisms
Some of the central criticisms of Gestaltism are based on the preference Gestaltists are
deemed to have for theory over data, and a lack of quantitative research supporting Gestalt
ideas. This is not necessarily a fair criticism as highlighted by a recent collection of
quantitative research on Gestalt perception.[24]
Other important criticisms concern the lack of definition and support for the
many physiological assumptions made by gestaltists[25] and lack of theoretical coherence in
modern Gestalt psychology.[24]
In some scholarly communities, such as cognitive psychology and computational
neuroscience, gestalt theories of perception are criticized for being descriptive rather
than explanatory in nature. For this reason, they are viewed by some as redundant or
uninformative. For example, Bruce, Green & Georgeson[8] conclude the following regarding
gestalt theory's influence on the study of visual perception:
The physiological theory of the gestaltists has fallen by the wayside, leaving us with a set of
descriptive principles, but without a model of perceptual processing. Indeed, some of their
"laws" of perceptual organisation today sound vague and inadequate. What is meant by a
"good" or "simple" shape, for example?

— Bruce, Green & Georgeson, Visual Perception: Physiology, Psychology and Ecology

Gestalt views in psychology


Gestalt psychologists find it is important to think of problems as a whole. Max Wertheimer
considered thinking to happen in two ways: productive and reproductive.[26]
Productive thinking is solving a problem with insight.
This is a quick insightful unplanned response to situations and environmental interaction.
Reproductive thinking is solving a problem with previous experiences and what is already
known (Wertheimer, 1945/1959).
This is a very common form of thinking. For example, when a person is given several segments of
information and deliberately examines the relationships among its parts, analyzes their
purpose, concept, and totality, he/she reaches the "aha!" moment, using what is already
known. Understanding in this case happens intentionally by reproductive thinking.
Another gestalt psychologist, Perkins, believes insight deals with three processes:

1. Unconscious leap in thinking.[19]
2. The increased amount of speed in mental processing.
3. The amount of short-circuiting that occurs in normal reasoning.[27]
Views going against the gestalt psychology are:

1. Nothing-special view
2. Neo-gestalt view
3. The Three-Process View
Gestalt psychology should not be confused with the gestalt therapy of Fritz Perls, which is
only peripherally linked to gestalt psychology. A strictly gestalt psychology-based therapeutic
method is Gestalt Theoretical Psychotherapy, developed by the German gestalt
psychologist and psychotherapist Hans-Jürgen Walter and his colleagues in Germany,
Austria (Gerhard Stemberger and colleagues) and Switzerland. Other countries, especially
Italy, have seen similar developments.

Fuzzy-trace theory
Fuzzy-trace theory, a dual process model of memory and reasoning, was also derived from
Gestalt psychology. Fuzzy-trace theory posits that we encode information into two separate
traces: verbatim and gist. Information stored in verbatim is exact memory for detail (the
individual parts of a pattern, for example) while information stored in gist is semantic and
conceptual (what we perceive the pattern to be). The effects seen in Gestalt psychology can
be attributed to the way we encode information as gist.[28][29]

Use in design
The gestalt laws are used in user interface design. The laws of similarity and proximity can,
for example, be used as guides for placing radio buttons. They may also be used in
designing computers and software for more intuitive human use. Examples include the
design and layout of a desktop's shortcuts in rows and columns.[30]
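As a concrete illustration of the radio-button advice, the hypothetical layout below keeps each group of related buttons tightly spaced and separates unrelated groups with extra padding, so the proximity law makes the grouping visible at a glance. The labels and spacing values are invented.

```python
import tkinter as tk

# Sketch of the proximity law in interface layout: buttons for one setting
# sit close together; extra vertical padding separates unrelated groups.

root = tk.Tk()
size = tk.StringVar(value="medium")
color = tk.StringVar(value="red")

tk.Label(root, text="Size").pack(anchor="w")
for s in ("small", "medium", "large"):       # tight spacing: one group
    tk.Radiobutton(root, text=s, variable=size, value=s).pack(anchor="w")

tk.Label(root, text="Color").pack(anchor="w", pady=(12, 0))  # gap = new group
for c in ("red", "green", "blue"):
    tk.Radiobutton(root, text=c, variable=color, value=c).pack(anchor="w")

root.mainloop()
```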

Music
An example of the Gestalt movement in effect, as it is both a process and result, is a music
sequence. People are able to recognise a sequence of perhaps six or seven notes, despite
them being transposed into a different tuning or key.[31]
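One simple way to make this observation concrete is to represent a melody by its successive intervals rather than its absolute pitches: transposition changes every pitch but leaves the interval pattern—the recognizable "whole"—intact. The note numbers below are arbitrary MIDI-style pitches chosen for illustration.

```python
# Transposition invariance: different pitches, identical interval pattern.

def intervals(notes):
    return [b - a for a, b in zip(notes, notes[1:])]

melody = [60, 62, 64, 65, 67, 65, 64]     # a seven-note sequence
transposed = [n + 5 for n in melody]       # same tune in a different key

print(intervals(melody) == intervals(transposed))  # True: same "gestalt"
```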

Quantum cognition modeling



Similarities between Gestalt phenomena and quantum mechanics have been pointed out by,
among others, chemist Anton Amann, who commented that "similarities between Gestalt
perception and quantum mechanics are on a level of a parable" yet may give useful insight
nonetheless.[32] Physicist Elio Conte and co-workers have proposed abstract, mathematical
models to describe the time dynamics of cognitive associations with mathematical tools
borrowed from quantum mechanics[33][34] and have discussed psychology experiments in this
context. A similar approach has been suggested by physicists David Bohm, Basil Hiley and
philosopher Paavo Pylkkänen with the notion that mind and matterboth emerge from an
"implicate order".[35][36] The models involve non-commutative mathematics; such models
account for situations in which the outcome of two measurements performed one after the
other can depend on the order in which they are performed—a pertinent feature for
psychological processes, as an experiment performed on a conscious person may influence
the outcome of a subsequent experiment by changing the state of mind of that person.
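The order dependence comes from the fact that the operators modeling the two measurements
need not commute. A minimal numerical sketch (the specific projection matrices below are
arbitrary illustrations, not a model taken from the cited papers):

    # Order effects from non-commuting projection operators, a standard
    # quantum-mechanics construction; the matrices are arbitrary examples.
    import numpy as np

    P = np.array([[1.0, 0.0],
                  [0.0, 0.0]])            # projection onto the x-axis

    v = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4)])
    Q = np.outer(v, v)                    # projection onto a 45-degree axis

    state = np.array([0.6, 0.8])          # an initial unit-length state

    print(Q @ (P @ state))                # "measure" P first, then Q
    print(P @ (Q @ state))                # reversed order: a different result
    print(np.allclose(P @ Q, Q @ P))      # False: the operators do not commute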

Constructivist Theory (Jerome Bruner)
A major theme in the theoretical framework of Bruner is that learning is an active
process in which learners construct new ideas or concepts based upon their
current/past knowledge. The learner selects and transforms information,
constructs hypotheses, and makes decisions, relying on a cognitive structure to
do so. Cognitive structure (i.e., schema, mental models) provides meaning and
organization to experiences and allows the individual to “go beyond the
information given”.

As far as instruction is concerned, the instructor should try to encourage
students to discover principles by themselves. The instructor and student should
engage in an active dialog (i.e., Socratic learning). The task of the instructor is to
translate information to be learned into a format appropriate to the learner’s
current state of understanding. Curriculum should be organized in a spiral
manner so that the student continually builds upon what they have already
learned.

Bruner (1966) states that a theory of instruction should address four major
aspects: (1) predisposition towards learning, (2) the ways in which a body of
knowledge can be structured so that it can be most readily grasped by the
learner, (3) the most effective sequences in which to present material, and (4) the
nature and pacing of rewards and punishments. Good methods for structuring
knowledge should result in simplifying, generating new propositions, and
increasing the manipulation of information.

In his more recent work, Bruner (1986, 1990, 1996) has expanded his theoretical
framework to encompass the social and cultural aspects of learning as well as
the practice of law.

Application
Bruner’s constructivist theory is a general framework for instruction based upon
the study of cognition. Much of the theory is linked to child development research
(especially Piaget). The ideas outlined in Bruner (1960) originated from a
conference focused on science and math learning. Bruner illustrated his theory in
the context of mathematics and social science programs for young children (see
Bruner, 1973). The original development of the framework for reasoning
processes is described in Bruner, Goodnow & Austin (1956). Bruner (1983)
focuses on language learning in young children.
Note that Constructivism is a very broad conceptual framework in philosophy and
science, and Bruner's theory represents one particular perspective.

Example
This example is taken from Bruner (1973):

“The concept of prime numbers appears to be more readily grasped when the
child, through construction, discovers that certain handfuls of beans cannot be
laid out in completed rows and columns. Such quantities have either to be laid
out in a single file or in an incomplete row-column design in which there is always
one extra or one too few to fill the pattern. These patterns, the child learns,
happen to be called prime. It is easy for the child to go from this step to the
recognition that a multiple table, so called, is a record sheet of quantities in
completed multiple rows and columns. Here is factoring, multiplication and primes
in a construction that can be visualized.”
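Bruner's bean demonstration maps directly onto a simple computation: a quantity is prime
exactly when it cannot be laid out as a complete grid with more than one row and more than
one column. A short sketch of that check:

    # Bruner's bean layouts as code: n beans form a complete rows-by-columns
    # rectangle (both dimensions > 1) exactly when n is composite.

    def rectangular_layouts(n):
        """All complete row-by-column layouts of n beans, excluding single file."""
        return [(rows, n // rows) for rows in range(2, n) if n % rows == 0]

    for n in [6, 7, 12, 13]:
        layouts = rectangular_layouts(n)
        verdict = "prime (single file only)" if not layouts else f"composite {layouts}"
        print(n, "->", verdict)
    # 6 -> composite [(2, 3), (3, 2)]
    # 7 -> prime (single file only)
    # 12 -> composite [(2, 6), (3, 4), (4, 3), (6, 2)]
    # 13 -> prime (single file only)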

Principles
1. Instruction must be concerned with the experiences and contexts that
make the student willing and able to learn (readiness).
2. Instruction must be structured so that it can be easily grasped by the
student (spiral organization).
3. Instruction should be designed to facilitate extrapolation and/or fill in the
gaps (going beyond the information given).

Bruner
By Saul McLeod, updated 2019

Bruner (1966) was concerned with how knowledge is represented and organized
through different modes of thinking (or representation).
In his research on the cognitive development of children, Jerome Bruner
proposed three modes of representation:
 Enactive representation (action-based)
 Iconic representation (image-based)
 Symbolic representation (language-based)
Bruner's constructivist theory suggests that, when faced with new material, it is
effective to follow a progression from enactive to iconic to symbolic
representation; this holds true even for adult learners.
Bruner's work also suggests that a learner, even of a very young age, is capable of
learning any material so long as the instruction is organized appropriately, in
sharp contrast to the beliefs of Piaget and other stage theorists.
Bruner's Three Modes of Representation
Modes of representation are the ways in which information or knowledge is
stored and encoded in memory.
Rather than neat age-related stages (like Piaget's), the modes of representation are
integrated and only loosely sequential, as they "translate" into each other.

Enactive (0 - 1 years)
This is the first mode to develop, used within the first year of life
(corresponding to Piaget's sensorimotor stage). Thinking is based entirely
on physical actions, and infants learn by doing rather than by internal
representation (or thinking).
It involves encoding action-based information and storing it in our memory, for
example in the form of movement as muscle memory: a baby might remember the
action of shaking a rattle.
This mode continues later in many physical activities, such as learning to ride a
bike.
Many adults can perform a variety of motor tasks (typing, sewing a shirt,
operating a lawn mower) that they would find difficult to describe in iconic
(picture) or symbolic (word) form.

Iconic (1 - 6 years)
Information is stored as sensory images (icons), usually visual ones, like
pictures in the mind. For some, this is conscious; others say they don’t experience
it.
This may explain why, when we are learning a new subject, it is often helpful to
have diagrams or illustrations to accompany the verbal information.
Thinking is also based on the use of other mental images (icons), such as those
relating to hearing, smell or touch.
Symbolic (7 years onwards)
This mode develops last. Here, information is stored in the form of a code or
symbol, such as language. It is acquired around six to seven years of age
(corresponding to Piaget's concrete operational stage).
In the symbolic stage, knowledge is stored primarily as words, mathematical
symbols, or in other symbol systems, such as music.
Symbols are flexible in that they can be manipulated, ordered, classified etc., so
the user isn’t constrained by actions or images (which have a fixed relation to that
which they represent).

The Importance of Language

Language is important for the increased ability to deal with abstract concepts.
Bruner argues that language can code stimuli and free an individual from the
constraints of dealing only with appearances, to provide a more complex yet
flexible cognition.
The use of words can aid the development of the concepts they represent and can
remove the constraints of the "here and now". Bruner views the infant as
an intelligent and active problem solver from birth, with intellectual abilities
basically similar to those of the mature adult.

Educational Implications
The aim of education should be to create autonomous learners (i.e., learning to
learn).
For Bruner (1961), the purpose of education is not to impart knowledge, but
instead to facilitate a child's thinking and problem-solving skills which can then
be transferred to a range of situations. Specifically, education should also develop
symbolic thinking in children.
In 1960, Bruner's text The Process of Education was published. Its main
premise was that students are active learners who construct their own knowledge.
Bruner (1960) opposed Piaget's notion of readiness. He argued that schools waste
time trying to match the complexity of subject material to a child's cognitive stage
of development.
This means students are held back by teachers as certain topics are deemed too
difficult to understand and must be taught when the teacher believes the child
has reached the appropriate state of cognitive maturity.
Bruner (1960) adopts a different view and believes a child (of any age) is capable
of understanding complex information: 'We begin with the hypothesis that any
subject can be taught effectively in some intellectually honest form to any child at
any stage of development.' (p. 33)
Bruner (1960) explained how this was possible through the concept of the spiral
curriculum. This involved information being structured so that complex ideas
can be taught at a simplified level first, and then re-visited at more complex levels
later on.
Therefore, subjects would be taught at levels of gradually increasing difficulty
(hence the spiral analogy). Ideally, teaching this way should lead to children being
able to solve problems by themselves.
Bruner (1961) proposes that learners construct their own knowledge and do this
by organizing and categorizing information using a coding system. Bruner
believed that the most effective way to develop a coding system is to discover it
rather than being told it by the teacher.
The concept of discovery learning implies that students construct their own
knowledge for themselves (also known as a constructivist approach).
The role of the teacher should not be to teach information by rote learning, but
instead to facilitate the learning process. This means that a good teacher will
design lessons that help students discover the relationship between bits of
information.
To do this, a teacher must give students the information they need, but without
organizing it for them. The use of the spiral curriculum can aid the process
of discovery learning.
Bruner and Vygotsky
Both Bruner and Vygotsky emphasize a child's environment, especially the social
environment, more than Piaget did. Both agree that adults should play an active
role in assisting the child's learning.
Bruner, like Vygotsky, emphasized the social nature of learning, citing that other
people should help a child develop skills through the process of scaffolding.
'[Scaffolding] refers to the steps taken to reduce the degrees of freedom in carrying
out some task so that the child can concentrate on the difficult skill she is in the
process of acquiring' (Bruner, 1978, p. 19).
The term scaffolding first appeared in the literature when Wood, Bruner, and
Ross described how tutors interacted with a preschooler to help the child solve a
block reconstruction problem (Wood et al., 1976).
The concept of scaffolding is very similar to Vygotsky's notion of the zone of
proximal development, and it's not uncommon for the terms to be used
interchangeably.
Scaffolding involves helpful, structured interaction between an adult and a child
with the aim of helping the child achieve a specific goal. The purpose of the
support is to allow the child to achieve higher levels of development by:

1. simplifying the task or idea
2. motivating and encouraging the child
3. highlighting important task elements or errors
4. giving models that can be imitated.

Bruner and Piaget

Obviously, there are similarities between Piaget and Bruner, but an important
difference is that Bruner's modes of representation are not stages: no mode
presupposes or replaces the one that precedes it. While one mode may sometimes
dominate in usage, they coexist.
Bruner states that what determines the level of intellectual development is the
extent to which the child has been given appropriate instruction together with
practice or experience.
So the right presentation and the right explanation will enable a child to
grasp a concept usually understood only by an adult. His theory stresses the role
of education and of the adult.
Although Bruner proposes stages of cognitive development, he does not see them as
representing separate modes of thought at different points of development (like
Piaget); rather, the modes develop gradually and remain integrated.

Jerome Bruner (1915 – 2016)
Jerome Seymour Bruner (October 1, 1915 – June 5, 2016) was an American psychologist who
made significant contributions to human cognitive psychology and cognitive learning
theory in educational psychology. Bruner was a senior research fellow at the New York University
School of Law.[1] He received a B.A. in 1937 from Duke University and a Ph.D. from Harvard
University in 1941.[2][3][4][5] He taught and did research at Harvard University, the University of
Oxford, and New York University. A Review of General Psychology survey, published in 2002,
ranked Bruner as the 28th most cited psychologist of the 20th century.[6]

Education and early life

Bruner was born blind (due to cataracts) on October 1, 1915, in New York City, to Herman and Rose
Bruner, who were Polish Jewish immigrants.[7][8] An operation at age 2 restored his vision. He
received a Bachelor of Arts degree in psychology in 1937 from Duke University, and went on to
earn a master's degree in psychology in 1939 and then a doctorate in psychology in 1941 from
Harvard University.[2] In 1939, Bruner published his first psychological article, on the effect
of thymus extract on the sexual behavior of the female rat.[9] During World War II, Bruner served on
the Psychological Warfare Division of the Supreme Headquarters Allied Expeditionary
Force committee under General Dwight D. Eisenhower, researching social psychological
phenomena.[7][10]

Career and research

In 1945, Bruner returned to Harvard as a psychology professor and was heavily involved in research
relating to cognitive psychology and educational psychology. In 1970, Bruner left Harvard to teach at
the University of Oxford in the United Kingdom. He returned to the United States in 1980, to continue
his research in developmental psychology. In 1991, Bruner joined the faculty at New York University
(NYU), where he taught primarily in the School of Law.[11]
As an adjunct professor at NYU School of Law, Bruner studied how psychology affects legal
practice. During his career, Bruner was awarded honorary doctorates from Yale
University, Columbia University, The New School, the Sorbonne, the ISPA Instituto Universitário, as
well as colleges and universities in such locations as Berlin and Rome, and was a Fellow of
the American Academy of Arts and Sciences.[5] He turned 100 in October 2015[12] and died on June
5, 2016.[4][13]

Cognitive psychology
Bruner is one of the pioneers of cognitive psychology in the United States, which began through his
own early research on sensation and perception as active, rather than passive, processes.
In 1947, Bruner published his study Value and Need as Organizing Factors in Perception, in which
poor and rich children were asked to estimate the size of coins or wooden disks the size of
American pennies, nickels, dimes, quarters and half-dollars. The results showed that the value and
need the poor and rich children associated with coins caused them to significantly overestimate the
size of the coins, especially when compared to their more accurate estimations of the same size
disks.[14]
Similarly, another study conducted by Bruner and Leo Postman showed slower reaction times and
less accurate answers when a deck of playing cards reversed the color of the suit symbol for some
cards (e.g. red spades and black hearts).[15] This series of experiments ushered in what some called
the 'New Look' psychology, which challenged psychologists to study not just an organism's response
to a stimulus, but also its internal interpretation.[7] After these experiments on perception, Bruner
turned his attention to the actual cognitions that he had indirectly studied in his perception studies.
In 1956, Bruner published the book A Study of Thinking, which formally initiated the study of
cognitive psychology. Soon afterward Bruner helped found the Harvard Center of Cognitive Studies.
After a time, Bruner began to research other topics in psychology, but in 1990 he returned to the
subject and gave a series of lectures, later compiled into the book Acts of Meaning. In these
lectures, Bruner contested the computer model of the mind, advocating a
more holistic understanding of cognitive processes.

Developmental psychology
Beginning around 1967, Bruner turned his attention to the subject of developmental psychology and
studied the way children learn. He coined the term "scaffolding" to describe an instructional process
in which the instructor provides carefully programmed guidance, reducing the amount of assistance
as the student progresses through task learning. Bruner suggested that students may experience, or
"represent" tasks in three ways: enactive representation (action-based), iconic representation
(image-based), and symbolic representation (language-based). Rather than neatly delineated
stages, the modes of representation are integrated and only loosely sequential as they "translate"
into each other. Symbolic representation remains the ultimate mode, and it "is clearly the most
mysterious of the three."
Bruner's learning theory suggests that it is efficacious, when faced with new material, to follow a
progression from enactive to iconic to symbolic representation; this holds true even for adult
learners. Bruner's work also suggests that a learner (even of a very young age) is capable of
learning any material so long as the instruction is organized appropriately, in sharp contrast to
the beliefs of Piaget and other stage theorists (Driscoll). Like Bloom's Taxonomy, Bruner
suggests a system of coding in which people form a hierarchical arrangement of related
categories, each successively lower level becoming more specific,
echoing Benjamin Bloom's understanding of knowledge acquisition as well as the related idea
of instructional scaffolding.
In accordance with this understanding of learning, Bruner proposed the spiral curriculum, a teaching
approach in which each subject or skill area is revisited at intervals, at a more sophisticated level
each time. First there is basic knowledge of a subject, then more sophistication is added, reinforcing
principles that were first discussed. This system is used in China and India. Bruner's spiral
curriculum, however, draws heavily on evolutionary ideas to explain how to learn better, and
for this reason it drew criticism from conservatives. In the United States, classes are split by
grade: life sciences in 9th grade, chemistry in 10th, physics in 11th. The spiral instead teaches
life sciences, chemistry, and physics all in one year, then two of the subjects, then one, then
all three again, to show how they fit together.[16] Bruner also believed that learning should be
spurred by interest in the material rather than tests or punishment, since one learns best when
one finds the acquired knowledge appealing.

Educational psychology
While Bruner was at Harvard he published a series of works about his assessment of current
educational systems and ways that education could be improved. In 1960, he published the
book The Process of Education. Bruner also served as a member of the Educational Panel of the
President's Science Advisory Committee during the presidencies of John F. Kennedy and Lyndon
Johnson. Referencing his overall view that education should not focus merely on memorizing facts,
Bruner wrote in The Process of Education that "knowing how something is put together is worth a
thousand facts about it." From 1964 to 1996, Bruner sought to develop a complete curriculum for the
educational system that would meet the needs of students in three main areas which he called Man:
A Course of Study. Bruner wanted to create an educational environment that would focus on (1)
what was uniquely human about human beings, (2) how humans got that way and (3) how humans
could become more so.[9] In 1966, Bruner published another book relevant to education, Toward a
Theory of Instruction, and then in 1973, another, The Relevance of Education. Finally, in 1996,
in The Culture of Education, Bruner reassessed the state of educational practices three decades
after he had begun his educational research. Bruner was also credited with helping found the Head
Start early childcare program.[17] Bruner was deeply impressed by his 1995 visit to the preschools
of Reggio Emilia and established a collaborative relationship with them to improve educational
systems internationally. Equally important was his relationship with the Italian Ministry of
Education, which officially recognized the value of this innovative experience.

Language development
In 1972, Bruner was appointed Watts Professor of Experimental Psychology at the University of
Oxford, where he remained until 1980. In his Oxford years, Bruner focused on early language
development. Rejecting the nativist account of language acquisition proposed by Noam Chomsky,
Bruner offered an alternative in the form of an interactionist or social interactionist theory of language
development. In this approach, the social and interpersonal nature of language was emphasized,
appealing to the work of philosophers such as Ludwig Wittgenstein, John L. Austin and John
Searle for theoretical grounding. Following Lev Vygotsky, the Russian theoretician of socio-
cultural development, Bruner proposed that social interaction plays a fundamental role in the
development of cognition in general and of language in particular. He emphasized that children learn
language in order to communicate, and, at the same time, they also learn the linguistic code.
Meaningful language is acquired in the context of meaningful parent-infant interaction, learning
"scaffolded" or supported by the child's language acquisition support system (LASS).
At Oxford Bruner worked with a large group of graduate students and post-doctoral fellows to
understand how young children manage to crack the linguistic code, among them Alison
Garton, Alison Gopnik, Magda Kalmar (Kalmár Magda), Alan Leslie, Andrew Meltzoff, Anat
Ninio, Roy Pea, Susan Sugarman,[18] Michael Scaife, Marian Sigman,[19] Kathy Sylva and many
others. Much emphasis was placed on the then-revolutionary method of videotaped home
observations, with Bruner showing a new wave of researchers how to get out of the laboratory
and take on the complexities of naturally occurring events in a child's life. This work was
published in a large number of journal articles, and in 1983 Bruner published a summary in the
book Child's Talk: Learning to Use Language.
This decade of research established Bruner at the helm of the interactionist approach to language
development, exploring such themes as the acquisition of communicative intents and the
development of their linguistic expression, the interactive context of language use in early childhood,
and the role of parental input and scaffolding behavior in the acquisition of linguistic forms. This work
rests on the assumptions of a social constructivist theory of meaning according to which meaningful
participation in the social life of a group as well as meaningful use of language involve an
interpersonal, intersubjective, collaborative process of creating shared meaning. The elucidation of
this process became the focus of Bruner's next period of work.

Narrative construction of reality

In 1980, Bruner returned to the United States, taking up the position of professor at the New School
for Social Research in New York City in 1981. For the next decade, he worked on the development
of a theory of the narrative construction of reality, culminating in several seminal publications which
contributed to the development of narrative psychology. His book Acts of Meaning has been cited
over 20,000 times, followed by Actual Minds, Possible Worlds, which has been cited in over 18,000
scholarly publications, making them two of the most influential works of the 20th century. In these
books, Bruner argued that there are two forms of thinking: the paradigmatic and the narrative. The
former is the method of science and is based upon classification and categorisation. The alternative
narrative approach organises everyday interpretations of the world in storied form. The challenge of
contemporary psychology is to understand this everyday form of thinking.[20]
