
JOURNAL OF THE EXPERIMENTAL ANALYSIS OF BEHAVIOR
1994, 62, 133-148
NUMBER 1 (JULY)

EFFECTS OF SOCIAL CONTEXT, REINFORCER PROBABILITY,


AND REINFORCER MAGNITUDE ON HUMANS' CHOICES
TO COMPETE OR NOT TO COMPETE
DONALD M. DOUGHERTY AND DON R. CHEREK
THE UNIVERSITY OF TEXAS-HOUSTON

In the first two experiments, subjects' choices to earn points (exchangeable for money) either by
competing with a fictitious opponent or by not competing were studied. Buskist, Barry, Morgan, and
Rossi's (1984) competitive fixed-interval schedule was modified to include a second response option,
a noncompetitive fixed-interval schedule. After choosing to enter either option, the opportunity for
reinforcers became available after the fixed-interval's duration had elapsed. Under the no-competition
condition, points were always available after the interval had elapsed. Under the competition condition,
points were available based on a predetermined probability of delivery. Experiments 1 and 2 examined
how reinforcer probabilities and reinforcer magnitudes affected subjects' choices to compete. Several
general conclusions can be made about the results: (a) Strong preferences to compete were observed
at high and moderate reinforcer probabilities; (b) competing was observed even at very low reinforcer
probabilities; (c) response rates were always higher in the competition component than in the
no-competition component; and (d) response rates and choices to compete were insensitive to
reinforcer-magnitude manipulations. In Experiment 3, the social context of this choice schedule was removed to
determine whether the high levels of competing observed in the first two experiments were due to a
response preference engendered by the social context provided by the experimenters through instructions. In contrast to the first two experiments, these subjects preferred the 60-s fixed-interval schedule
(formerly the no-competition option), indicating that the instructions themselves were responsible for
the preference to compete. This choice paradigm may be useful to future researchers interested in the
effects of other independent variables (e.g., drugs, social context, instructions) on competitive behavior.
Key words: competition, choice, reinforcer probability, reinforcer magnitude, social context, instructions, button press, humans

Operant researchers have studied competitive behavior in the laboratory by using either a single reinforcement schedule (Buskist, Barry, Morgan, & Rossi, 1984; Buskist & Morgan, 1987; Schmitt, 1987) or a concurrent reinforcement schedule in which subjects choose between competing and another alternative (Hake, Olvera, & Bell, 1975; Olvera & Hake, 1976; Schmitt, 1976, 1987). In the single-schedule case (Buskist et al., 1984; Buskist & Morgan, 1987), a pair of subjects compete for a single reinforcer in the absence of other alternatives. Usually a fixed-interval (FI) or fixed-ratio (FR) schedule is in effect, and the 1st subject to complete the schedule's requirement receives the reinforcer; the other subject does not receive a reinforcer. In the concurrent-schedule case, researchers have studied subjects' preference to compete for reinforcers when alternatives are concurrently available. Three choice paradigms have been studied: (a) competing or working alone (Schmitt, 1987); (b) competing or cooperating (Schmitt, 1976, 1987); and (c) competing or sharing (Olvera & Hake, 1976; Hake, Olvera, & Bell, 1975). Both the single- and concurrent-schedule approaches have proven to be useful in the determination of variables that control competing behavior and choices to compete.

Several generalizations can be made about the variables that have been found to control competitive responding in studies using these paradigms. In general, subjects respond at high rates on all competition schedules, including FI (Buskist et al., 1984), variable-interval (VI) (Schmitt, 1987), and FR schedules (Olvera & Hake, 1976; Hake, Olvera, & Bell, 1975), with responses on the FI schedules distributed in a break-and-run pattern (unlike conventional human FI schedule performance; see Buskist et al., 1984; Weiner, 1964, 1969). A number of other variables affect competitive responding and subjects' preference to compete when given other response alternatives. Elaborated instructions (Buskist et al., 1984), performance feedback (Schmitt, 1984), and special response-reinforcer requirements (e.g., limited hold; Buskist & Morgan, 1987) facilitate the acquisition of effective competitive responding, whereas differential-reinforcement-of-low-rate (DRL) schedules and FI schedules retard the acquisition of effective competitive responding (Buskist & Morgan, 1987). When alternatives to competing are available, increasing the response requirements of competitive FR schedules (Hake, Olvera, & Bell, 1975; Olvera & Hake, 1976), increasing the number of competitors (Schmitt, 1976), or providing an alternative to work alone for reinforcers (Schmitt, 1987) decreases the probability that subjects will choose the competitive response alternative.

This research was supported by a grant from the National Institute on Drug Abuse (DA 05154). Donald M. Dougherty was supported by a postdoctoral fellowship (DA 07247) from the National Institute on Drug Abuse. Reprint requests should be directed to Don R. Cherek at the Department of Psychiatry and Behavioral Sciences, University of Texas, Houston Health Science Center, 1300 Moursund Street, Houston, Texas 77030.
In the present study there were four objectives. The first was to develop a procedure that
could be used to study subjects' choices or preference for competition when given an alternative response option. Previously, when an
alternative to competing has been made available, competitive behavior has declined sharply
across the first few sessions of exposure. In
concurrent-choice studies involving the alternatives of either competing or working alone,
subjects have been found to prefer to work
alone for reinforcers. For example, subjects
chose to work alone in Schmitt's (1976, Experiment 2) experiment when they were given
the choice either to compete on a VI schedule
or to work alone on a less profitable VI schedule. These findings were attributed to a phenomenon that Schmitt characterized as an "escape from, or avoidance of, competition"
(Schmitt, 1976, p. 232); this characterization
was also cited as being consistent with other
findings by Steigleder, Weiss, Balling, Wenninger, and Lombardo (1980) and Steigleder,
Weiss, Cramer, and Feinberg (1978). In addition, others (Hake, Vukelich, & Olvera, 1975;
Olvera & Hake, 1976) have reported sharp
decreases in subjects' preference for competing
after a brief exposure to choice schedules. For
example, in the Olvera and Hake experiment
(1976, see Figure 1), subjects were given the
opportunity to compete or work alone for reinforcers on FR schedules. In the beginning,
subjects chose to compete most of the time but
after a few sessions they chose to work alone
nearly exclusively. Because choices for competition dissipate rapidly, it is very difficult to
study competitive behavior for long periods of

time and to introduce other independent variables.


The second objective was to determine how
sensitive subjects were to changes in reinforcer
probabilities during competition (i.e., "winning") and how these changes would influence
choices to compete. Buskist and Morgan
(1987), using a single reinforcement schedule,
have reported the only study investigating the
role of reinforcement probability on competitive responding. Unlike the conventional
competitive FI schedule, their schedule presented reinforcers according to a predetermined probability. The probability of reinforcement began
at .67 (baseline performance on the competitive FI schedule) and then changed sequentially every 10 min to .45, .61, .28, and .61.
After the initial decrease in reinforcement
probability, rates of responding decreased, but
there was not a systematic relationship between rates of responding and reinforcement
probabilities. As a result, Buskist and Morgan
concluded that intermittently delivered reinforcement was not responsible for the high
response rates or the break-and-run patterns
of responding controlled by their conventional
competitive FI schedule. The reason for manipulating probability in the present experiments was to see how preference for competition in a discrete-choice procedure was
affected by levels of reinforcer probability.
The third objective was to investigate the
effects of reinforcer magnitude on preference
to compete and rates of responding. Little research has been concerned with examining the
effects of reinforcement magnitude on competitive responding. The only related study has
been reported by Schmitt (1976, Experiment
2), who investigated subjects' choices either to
compete or cooperate while manipulating the
reinforcement magnitude in the two alternatives. In his experiment, subjects began in a
condition in which the number of reinforcers
(points) was held constant in one alternative
while the number of reinforcers was systematically lowered and raised in the other alternative. Some subjects began with the number
of reinforcers held constant in the competition
alternative, and others began with the number
of reinforcers held constant in the cooperation
alternative. Choices for either competition or
cooperation were directly related to the reinforcer magnitudes available for each alternative.
The fourth objective was to determine what

role instructions (or social context) played in
subjects' preference for the competition option.
It became apparent during our initial experiments that probability and reinforcer magnitude could not adequately account for subjects' strong preference for competing. Previous
research has shown that providing either a
social or nonsocial context through instructions
not only can influence responding but also can
modulate the effects of drugs on responding
(Cherek, Spiga, Roache, & Cowan, 1991;
Cherek, Steinberg, Kelly, Robinson, & Spiga,
1990; Kelly & Cherek, 1993).
In the first two experiments, a discrete-choice
procedure was used in which subjects could
choose to earn monetary reinforcers either by
competing with a fictitious opponent or by not
competing. Buskist et al.'s (1984) competitive
FI schedule procedure was modified to include
another response option, a noncompetitive FI
schedule. On either schedule, the subject's first
response after the interval had elapsed,
and the reinforcer had become available,
produced a reinforcer. Under no-competition
conditions, reinforcers were always available
after every interval had elapsed; under competition conditions, reinforcers were available
after an interval had elapsed according to a
predetermined probability of delivery. In Experiment 1, interest was in how subjects would
distribute their choices between competing and
not competing when given the opportunity to
compete under three reinforcer-probability
(.25, .50, and .75) conditions. In Experiment
2, interest was in whether or not subjects'
choices to compete were sensitive to gradually
changing reinforcer probabilities that occurred
within a single session at different reinforcer
magnitudes (similar to the procedure used by
Buskist & Morgan, 1987, Experiment 3). And
in Experiment 3, a nonsocial version of our
concurrent-choice schedule was used to see how
subjects would distribute their choices between
the two alternative reinforcement schedules (FI
60 s and probability FI 30 s) without the stimuli and instructions that provided a social context in the previous experiments.

EXPERIMENT 1
The main purpose of Experiment 1 was to
determine how subjects would distribute their
choices between the competition and no-competition alternatives at different reinforcer
probabilities (i.e., the probability of "winning" in the competition option). Would subjects prefer to compete or not to compete when the
alternatives were equally profitable, and would
they continue to choose to compete across many
sessions? Three reinforcer probabilities in the
competition component were used: .25, .50,
and .75. These levels were chosen because .50
represented a probability at which the alternatives were, on average, equally profitable;
.25 represented a probability at which the no-competition component was more profitable;
and .75 represented a probability at which the competition component was
more profitable. Besides choices to compete,
we were also interested in subjects' rates of
responding while competing and not competing. We expected that choices to compete would be influenced
by reinforcer probability and that rates of responding would be higher while competing
than while not competing.
METHOD

Subjects
Three males, between 19 and 40 years old,
participated in the experiment. Two subjects
(S-815 and S-816) reported that they had not
participated in research previously, and the
other (S-407) reported previous participation
in a tobacco-cigarette smoking study in our
laboratory several years earlier. These subjects' educational levels ranged between 10 and
12 years.
Apparatus
During experimental sessions, subjects sat
in a sound-attenuating chamber (1.32 m by
1.62 m by 2.23 m) that contained a response
panel and a computer monitor. The response
panel was a metal box (43.2 cm by 26.0 cm
by 10.2 cm) containing three push buttons labeled A, B, and C. The panel's wire lead was
of sufficient length to allow the subject to move
the panel onto his lap or to place it on a shelf
(28.0 cm wide) that extended across the full
length of the wall (83.5 cm above the chamber's floor) in front of the subject's chair. Located just behind this shelf, at the subject's eye
level, was an Apple monochrome monitor.
Also located in the chamber was a ventilation
fan (its noise served to mask extraneous sounds)
and a ceiling light. Experimental events were
controlled and responses were monitored by
an Apple IIGS computer equipped with an Applied Engineering I/O 32 interfacing system.

Procedure
Subjects were recruited through advertisements in local newspapers as "paid volunteers for behavioral research"; no information was given as to the content of the research. Following a preliminary telephone interview and screening, potential subjects were invited to come into the laboratory for an in-depth interview. During the interview, subjects were screened for possible psychiatric or medical illnesses. This was determined by a general health survey administered by the interviewer. Grounds for exclusion included the detection of any current or past psychiatric disorder or any physical illness.

An expired air sample was obtained to estimate blood alcohol level using an Intoximeter Model 3000 III. Urine samples were obtained from the subject every morning to monitor drug use. Urine samples were tested using the Enzyme Multiple Immunoassay Technique-Drug Abuse Urine assay (EMIT d.a.u. by SYVA Corporation). This procedure allowed us to screen for cannabinoids, cocaine, barbiturates, benzodiazepines, phencyclidine, and opiates as well as approximately 150 other metabolites of therapeutic agents and drugs of abuse. If alcohol was detected, the subject was sent home. If drugs (including alcohol) were detected on two occasions, the subject's participation was terminated.

Instructions. During the interview, and prior to the subject's first session, the experimenters provided a few general instructions. No information about the response contingencies was provided. These instructions were limited to the following:

    You can choose to earn points, later exchangeable for money, in one of two ways: (a) you can earn points by choosing the competing option and competing with another individual you are paired with; or (b) you can earn points by choosing the not competing option and working by yourself. Every few minutes you will be given the choice to enter either option. Your earnings will be displayed on a counter appearing in the middle of your monitor, and the large letters appearing on your screen correspond with the buttons that are effective.

Several other procedural matters were explained during the interview. Subjects were told that they would participate in five sessions per day, each 60 min in duration, scheduled throughout the day with the first session starting at 8:30 a.m. and the last one ending at 4:00 p.m. During each of these sessions the subject was told that he could be paired with different individuals and that these individuals might change from day to day or even from session to session. Subjects were also told that the cumulative number of points earned during each day's sessions would be exchanged for cash at the end of each day.

Competition/no-competition schedule. The competition/no-competition schedule was a discrete-choice procedure with an initial choice component, in which the subject could choose to earn reinforcers by either competing or not competing. In the initial choice component, the following instructions appeared in large letters on the subject's monitor for 10 s: "PRESS BUTTON A TO COMPETE, OR DO NOTHING TO NOT COMPETE."

If the subject pressed Button A, the subject entered the competition component. Once in that component, a large letter C and the word "COMPETING" appeared in the bottom left corner of the monitor. Responses on Button C were reinforced on an FI 30-s schedule (on a probability basis). If a reinforcer was scheduled, when the subject emitted the response that completed the FI schedule's requirement, 10 points (equivalent to $0.10) were added to the subject's counter displayed in the middle of the monitor. When points were given, the C and "COMPETING" were removed from the screen, and "YOU WIN" appeared with the counter on the screen for 5 s. When points were not given, the C and "COMPETING" were removed from the screen, and "YOU LOSE" appeared with the counter on the screen for 5 s. In either case, the feedback (either "YOU WIN" or "YOU LOSE") was removed from the screen after 5 s, and the next FI 30-s schedule was signaled by the reappearance of the C and "COMPETING." The subject completed this component after four of these FI 30-s schedules.

If the subject did not press Button A in the initial choice component before 10 s had elapsed, he entered the no-competition component. Once in the no-competition component, a large letter B and the words "NOT COMPETING" appeared in the bottom right corner of the monitor. Here two reinforcers

Table 1
Parameter values of the competition/no-competition schedules used in each of the three experiments.

                                              Competition component       No-competition component
                    Probability               Reinforcer   Average per    Reinforcer   Total per
Experiment          conditions                magnitudes   component      magnitudes   component
                                              ($)          entry ($)     ($)          entry ($)
1                   .25                       0.10         0.10          0.10         0.20
                    .50                       0.10         0.20          0.10         0.20
                    .75                       0.10         0.30          0.10         0.20
2 Baseline          probability held          0.05         0.14          0.05         0.10
  sessions          constant at .70           0.10         0.28          0.10         0.20
                                              0.20         0.56          0.20         0.40
  Probe             probability decreased     0.05                       0.05         0.10
  sessions          from .70 to .10           0.10                       0.10         0.20
                                              0.20                       0.20         0.40
3                   .25                       0.10         0.10          0.10         0.20
                    .50                       0.10         0.20          0.10         0.20
                    .75                       0.10         0.30          0.10         0.20

were always available, one on each of two FI 60-s schedules of reinforcement. When the subject emitted a response that completed the FI schedule's requirement, 10 points (equivalent to $0.10) were added to the subject's counter displayed in the middle of the monitor. When points were given, the B and "NOT COMPETING" were removed from the screen, and the counter was displayed on the monitor by itself for 5 s. The next FI 60-s schedule was signaled by the reappearance of the B and "NOT COMPETING." The subject completed this component after two of these FI 60-s schedules.
At the completion of either component (competition or no-competition), the subject's monitor darkened for 20 s and then returned to
the initial choice component. This schedule
repeated continuously for exactly 60 min, at
which time the session terminated and the
words "SESSION IS OVER" appeared on the
monitor.
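The contingencies just described can be summarized in a short simulation. This is our own illustrative sketch, not the authors' software: all names are ours, timing is reduced to FI durations, and the display events are omitted.

```python
import random

def play_component(compete: bool, p_win: float, magnitude: int = 10) -> tuple[int, int]:
    """Simulate one entry into the competition or no-competition component.

    Competition: four FI 30-s schedules, each reinforced (10 points,
    "YOU WIN") with probability p_win, otherwise "YOU LOSE".
    No-competition: two FI 60-s schedules, each always reinforced.
    Either way, one component entry comprises 120 s of FI time.
    """
    points, seconds = 0, 0
    if compete:
        for _ in range(4):                 # four FI 30-s schedules
            seconds += 30
            if random.random() < p_win:    # reinforcer scheduled this trial?
                points += magnitude
    else:
        for _ in range(2):                 # two FI 60-s schedules
            seconds += 60
            points += magnitude            # always reinforced
    return points, seconds
```

One consequence visible in the sketch is that the two components are, on average, equally profitable when p_win = .50 (4 x 10 x .50 = 2 x 10), which is why the .50 condition was run first.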
Probability of reinforcer manipulations. Three
reinforcer probabilities (.25, .50, and .75)
were used in the competition component. A
summary of this schedule's parameter manipulations appears in the top portion of Table
1. Subjects spent a minimum of 15 sessions at
each probability value. Stability was determined using the subject's percentage of choices

to enter the competing component (calculated


using the total number of choice components
completed and the number of times the subject
chose to enter the competing component). A
subject's behavior was considered stable if his
percentage of choices to compete during the
last three sessions showed no upward or
downward trend and no session's percentage
differed by more than 15% from their mean. Once the
stability criterion was met, subjects were advanced to
another probability value. All subjects experienced the different probability values in the
same order; each began at the .50 value, followed by the .25 and .75 values. The .50 condition was used first in order to determine
preference for either component, because at
this value the competition and no-competition
alternatives were equally profitable, on average.
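The stability rule can be written as a small check. This is our rendering under stated assumptions: the function name is ours, the authors' trend judgment may have been made by visual inspection, and "trend" is simplified here to strict monotonicity across the last three sessions.

```python
def is_stable(percent_compete: list[float], band: float = 0.15) -> bool:
    """Illustrative stability check (our simplification of the criterion).

    Stable if, over the last three sessions, the percentage of choices
    to compete shows no upward or downward trend and each session lies
    within 15% of the three-session mean.
    """
    if len(percent_compete) < 3:
        return False
    a, b, c = percent_compete[-3:]
    trending = (a < b < c) or (a > b > c)               # monotonic up or down
    mean = (a + b + c) / 3
    within_band = all(abs(x - mean) <= band * mean for x in (a, b, c))
    return not trending and within_band
```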
RESULTS
The 3 subjects met the stability requirement
at or shortly after the 15-session exposure criterion (range, 15 to 23 sessions). The number
of sessions of exposure for each subject and
condition appear in Table 2.
Each subject's percentage of choices to compete appears in Figure 1. The reinforcer-probability manipulations made in the competition

Table 2
The number of sessions of exposure to conditions for subjects in Experiments 1, 2, and 3.

                                                      Sessions
Experiment                  Subject    Condition      of exposure
1                           S-407      .25            23
                                       .50            15
                                       .75            20
                            S-815      .25            17
                                       .50            20
                                       .75            15
                            S-816      .25            16
                                       .50            15
                                       .75            15
2 Baseline sessions         S-812      $0.05          33
  (p = .70)                            $0.10          30
                                       $0.20          31
                            S-820      $0.05          30
                                       $0.10          35
                                       $0.20          30
                            S-847      $0.05          31
                                       $0.10          30
                                       $0.20          34
3                           S-885      .25            15
                                       .50            15
                                       .75            18
                            S-898      .25            15
                                       .50            18
                                       .75            17
                            S-909      .25            16
                                       .50            15
                                       .75            19

component systematically affected subjects'


choices to compete. When the reinforcer probability was at its highest level (.75), all subjects
preferred to compete. When the reinforcer
probability was at .50, 2 of the 3 subjects (S-815 and S-816) clearly preferred to compete.
Even when the probability was at its lowest
level (.25) and the no-competition alternative
was more profitable, 2 subjects (S-407 and
S-815) still chose to spend a significant percentage of their session time in the competing
option. These subjects exhibited a bias for the
competing option.
Each subject's rates of responding in the
competition and no-competition components
also appear in Figure 1. Rates of responding
here, as well as in the two experiments that
follow, were calculated using the number of
responses and time spent within a relevant
component. Rates of responding were always
higher in the competition component than in

the no-competition component. Rates of responding, however, differed considerably


among subjects. Two subjects, S-407 and
S-815, emitted high response rates, and the
other subject, S-816, emitted lower response
rates. Reinforcer-probability manipulations,
however, did not systematically affect rates of
responding.
Because subjects on interval-based schedules can, by not responding at the appropriate
times, sometimes produce deviations from programmed probabilities, we compared the programmed and obtained interreinforcer intervals. Although there was some variation from
session to session in the obtained reinforcer
probability, the day's average obtained reinforcer probability was close to the programmed
reinforcer probability.

EXPERIMENT 2
In Experiment 1 choices to compete were
affected by different reinforcer probabilities
that were held constant across a minimum of
15 hour-long sessions. In Experiment 2 we
were interested in determining whether choices
to compete and rates of responding would be
sensitive to reinforcer-probability changes occurring within a single session. To do this, we
modified our competition/no-competition
schedule to include moment-to-moment control over the reinforcer probability in the competition option. The subject's behavior was first
stabilized in a constant reinforcer-probability
condition in which most of the reinforcers were
earned in the competition option, and then a
"probe session" was introduced in which the
probability began at the baseline level and was
then gradually lowered throughout the session.
Of interest were two aspects of sensitivity to
the gradual changes in reinforcement probability: the point at which subjects switched to
the no-competition option and the rates of responding during the probe sessions. This procedure was conducted twice at three different
reinforcer magnitudes. We manipulated reinforcer magnitude because we have previously observed this variable to produce systematic increases in rates of responding and changes
in choices between progressive-ratio and fixedtime schedules (Cherek & Dougherty, in press).
Specifically, we were interested in whether
subjects would choose to spend more time in the
competition option as the reinforcer magnitudes became larger.


Fig. 1. The mean percentage of choices (between a competition and no-competition option) for each subject is
shown on the left. Each bar in these graphs represents the mean percentage of the subject's total choices to enter the
competition component; the error bars represent the standard error of the mean. In the right panel, each subject's rates
of responding in both the competition and no-competition components are shown for each of these same reinforcer-probability conditions. Data are taken from the last three sessions of exposure.


METHOD
Subjects and Apparatus
Three experimentally naive males, between
19 and 30 years old, participated in this experiment. All were recruited and treated in a
manner identical to that described in Experiment 1. The apparatus used was also identical
to that described previously.

Procedure
All subjects began in a baseline condition in
which the reinforcer probability in the competition component was fixed at .70. This
probability level was selected during a preliminary pilot study for two reasons: (a) This
value reliably produced high levels of competitive responding, and (b) this value allowed
a sufficient range of lower probability for comparisons within a probe session (see below).
After a minimum of 15 sessions, and after
meeting the same stability requirements as in
Experiment 1, a probe session was introduced.
In a probe session, the probability began at
.70 and was decreased by .01 after every minute of the 60-min session (from .70 to .10 in
a session). After exposure to the probe session,
the subject's behavior was again stabilized on
the baseline schedule and another probe session was introduced. This procedure was repeated until subjects experienced two probe
sessions at each of three different reinforcer
magnitudes: $0.05, $0.10, and $0.20. All subjects were exposed to the three reinforcer magnitudes in the following order (twice): $0.10,
$0.20, and $0.05. This order of exposure was
used because initial use of $0.05 might not
maintain participation. The no-competition
schedule was the same as in the previous experiment (two FI 60-s schedules); at any given
time, the reinforcer magnitudes present in this
component were equal to the magnitude of the
reinforcers available in the competing component. A summary of this experiment's parameter manipulations appears in Table 1.
RESULTS
The 3 subjects met the stability requirement
at or shortly after the minimum 15 sessions of

exposure both times the three reinforcer magnitudes were used. The total number of sessions at each reinforcer magnitude ranged from
30 to 35 sessions. The number of sessions of

exposure for each subject and condition appear


in Table 2.
The left panels of Figure 2 show the percentage of time subjects spent in the competing
component during baseline (a mean calculated
using the five sessions previous to the probe
session) and probe sessions at each of the three
reinforcer magnitudes. Results are presented
as the percentage of total FI time in the session
spent in the competition component before
switching from competition to no-competition
during the probe session. Subjects' choices to
compete (or time spent in the competing option) were affected by the gradual changes in
reinforcement probability but were generally
unaffected by changes in reinforcement magnitude. All subjects chose to spend nearly 100%
of their time in the competing component during baseline (with reinforcer probability of .70).
But when a probe session was introduced, subjects eventually switched from the competition
option to the no-competition option. The point
at which subjects did switch, however, differed
considerably among subjects. If subjects were
optimally sensitive to the contingencies, the
most equitable time to switch would be 20 min
into the session. During the first 20 min the
reinforcer probability was higher in the competition component; at 20 min the reinforcement densities were equal; and during the last
40 min the reinforcer probability was lower
in the competition component. Additional time
spent in the competition component beyond the
first 20 min (or 33% of the session's time) did
not maximize the number of reinforcers.
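The 20-min indifference point follows directly from the schedule parameters, as this sketch of the arithmetic shows. It is our own illustration; the function names and the per-minute framing are assumptions, with the reinforcer rates taken from the schedule descriptions above.

```python
def probe_probability(minute: float) -> float:
    """Reinforcer probability in the competition option during a probe
    session: begins at .70 and falls by .01 per minute, down to .10."""
    return max(0.70 - 0.01 * minute, 0.10)

def expected_rate(compete: bool, minute: float, magnitude: float = 0.10) -> float:
    """Expected earnings ($) per minute of FI time.

    Competition: two FI 30-s opportunities per minute, each reinforced
    with the current probability. No-competition: one FI 60-s reinforcer
    per minute, always delivered.
    """
    if compete:
        return 2 * magnitude * probe_probability(minute)
    return magnitude

# The two rates are equal when 2p = 1, i.e., p = .50, which the probe
# schedule reaches at minute 20; the equality is independent of
# reinforcer magnitude, consistent with magnitude's lack of effect.
```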
As mentioned above, subjects eventually
switched from competition to no-competition,
but did so long after the point at which switching would have resulted in the maximum number of reinforcers. S-812 made the switch after
approximately 60% of the session time had
elapsed; at this time the reinforcer probability
was .34 (a value half that of the baseline). The
other 2 subjects, S-820 and S-847, despite the
severely reduced number of reinforcers, remained in the competition component for an
average of 73.0% and 87.0% of the probe session; at this point, the reinforcer probabilities
were .26 and .18, respectively, far below the
baseline probability of .70. Once a switch was
made, subjects generally chose to remain in the
no-competition option for the remainder of the
session: Mean time spent in the competition
component after the first switch was made to


Fig. 2. The percentages of time each subject spent in the competition option and their rates of responding during
baseline and the two probe sessions in Experiment 2. The reinforcement probability in the competition component
remained at .70 during baseline conditions. In probe sessions the reinforcement probability was gradually lowered
from .70 to .10 (.01 per minute). In the left panel is the percentage of the session's time each subject spent in the
competition component during baseline and probe sessions, and in the right panel are the rates of responding in the
competition component for these same sessions. Four bars appear at each reinforcer magnitude: White bars represent
the mean percentage of time spent in the competing component during baseline, and the solid gray and black bars
represent the percentage of time spent in the competition component during the two probe sessions (error bars represent one standard error of the mean).



the no-competition component averaged less
than 2 min for all subjects (range, 1.4 to 2.3
min).
Rates of responding during the baseline and
probe sessions under the three reinforcer-magnitude conditions are shown in Figure 2. Relative to baseline levels, rates of responding
during probe sessions were higher. Reinforcer-magnitude manipulations, however, did not
systematically affect rates of responding. As in
the previous experiment, response rates were
always higher in the competition component
than in the no-competition component (not
shown in Figure 2).

EXPERIMENT 3

In Experiment 3, a nonsocial version of our discrete-choice schedule was used to see how subjects would distribute their choices between the two alternative reinforcement schedules without the social context present in the previous experiments. The stimuli and instructions referring to other subjects and to competition were omitted. This study examined whether the differences between the two schedules used in the competition/no-competition components may have been responsible for subjects' preference for the component labeled "competing." Subjects may simply prefer the less predictable outcomes in the competition component over the more predictable outcomes in the no-competition component. In addition, the difference between the durations of the two schedules may have contributed to the observed preference for competition. In the competition component, a stimulus change occurred after every 30 s; in the no-competition component, a stimulus change occurred after every 60 s. To assess possible preference for either of the schedules, we replicated Experiment 1 using two reinforcement schedules and did not provide instructions relating to the social context. The reinforcement probability in the FI 30-s schedule (formerly the competition component) was either .25, .50, or .75. In this experiment, the most important probability was .50, because at this level both options were equally profitable. At the .50 probability level in the previous competition experiments, subjects preferred the competition option. In the absence of competition-related instructions or stimuli, how would subjects distribute their choices between the two options?

METHOD

Subjects and Apparatus

Three experimentally naive male subjects were recruited in a manner similar to that used in the previous experiments. These subjects' ages ranged between 19 and 21 years old. The same apparatus was used.

Procedure

The procedures used in this experiment changed only slightly from those used in Experiments 1 and 2. The instructions and the schedule were modified to omit the stimuli relevant to the competition context. The words "COMPETING," "NOT COMPETING," "YOU WIN," and "YOU LOSE" were omitted from the monitor, and instead of choosing to enter the competition or no-competition option, subjects could "PRESS BUTTON A TO ENTER OPTION C, OR DO NOTHING TO ENTER OPTION B." In Option B, the counter was incremented by $0.10 after an FI 60-s schedule was completed; in Option C, the counter was incremented by $0.10 according to an FI 30-s probability schedule. When the response was made that completed an FI schedule in either component, the letter (either B or C) was removed from the screen and the counter appeared on the monitor alone for 5 s. After 5 s elapsed, and if another FI schedule was to follow, the start of the next FI schedule was signaled by the reappearance of the letter corresponding to the component that the subject was in. As in Experiment 1 (see Table 1), three reinforcer probabilities (.25, .50, and .75) were used in Option C. The same stability criterion and order of exposure were used.

The instructions were similar to those used in Experiments 1 and 2, but the social context was removed:

You can choose to earn points, later exchangeable for money, in one of two ways: (a) you can earn points by choosing Option C; or (b) you can earn points by choosing Option B. Every few minutes you will be given the choice to enter either option. Your earnings will be displayed on a counter appearing in the middle of your monitor, and the large letters appearing on your screen correspond with the buttons that are effective.

RESULTS

The 3 subjects met the stability requirement at or shortly after the minimum 15 sessions of


Fig. 3. The left panel shows the mean percentage of total choices for Option C, in which reinforcers were sometimes
available at the completion of an Fl 30-s schedule and delivered according to a predetermined probability: either .25,
.50, or .75. In the alternative, Option B, reinforcers were always available at the completion of an FI 60-s schedule.
The error bars represent the standard error of the mean. In the right panel, each subject's rate of responding in both
Option C and Option B is shown for each of these same reinforcer-probability conditions.


exposure. The number of sessions of exposure for each subject and condition appears in Table 2.
When the probability was .50, the nonsocial version of the competition/no-competition schedule produced dramatically lower preference for the schedule equivalent to the competition component used in the previous experiments. The left panel of Figure 3 shows that when
the components were equally profitable, the
percentage of choices to enter the FI 30-s component (formerly the competition component)
was at or near zero for all 3 subjects. Because
there were very few, if any, choices to enter
the competition component at the beginning of
this probability condition, the procedure was
slightly modified for 2 of the 3 subjects (S-898
and S-909) to force exposure to the change in
reinforcer probability: When the probability
was changed to .75, Button B was removed for
one session. At the .75 probability level, where
the competition option was more profitable,
subjects entered this component nearly 100%
of the time. At the .25 reinforcer probability
level, where the FI 60-s schedule (formerly
the no-competition component) was more profitable, the percentage of choices to enter the
competing component was at or near zero.
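Under these contingencies the two options' programmed payoffs can be compared directly. A rough sketch of expected earnings per minute, assuming the $0.10 reinforcer and the FI durations given in the Method (the helper function and its name are ours, not the authors' analysis):

```python
def earnings_per_min(option, p=1.0, cents=10):
    """Expected cents per minute in Experiment 3's two options.

    Option B: FI 60 s, reinforcer always available -> one reinforcer per minute.
    Option C: FI 30 s, reinforcer delivered with probability p
              -> two opportunities per minute, each worth p * cents on average.
    """
    if option == "B":
        return float(cents)
    if option == "C":
        return 2 * p * cents
    raise ValueError("option must be 'B' or 'C'")

for p in (0.25, 0.50, 0.75):
    # At .25, Option C pays half of B; at .50 they are equal; at .75, C pays 1.5x B
    print(p, earnings_per_min("B"), earnings_per_min("C", p))
```

This is consistent with maximization at .25 and .75; at .50, where the options were equally profitable, subjects nonetheless preferred the FI 60-s option.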
The right panels in Figure 3 show rates of
responding at each of the probability values.
Rates of responding under these nonsocial conditions were higher in whichever schedule was
preferred. This occurred when reinforcement
probability was .25 or .50, but at the .75 probability, subjects responded faster on the schedule similar to the competition component.
These rates of responding were generally lower
than the rates generated by the competition/
no-competition schedule in Experiments 1
and 2.

DISCUSSION
Using a concurrent-choice procedure with
competition and no-competition response options, we investigated how subjects' choices to
engage in either option were affected by manipulations of reinforcer magnitude and probability (i.e., the probability of "winning" while
competing). Reinforcer probability systematically affected subjects' percentage of choices
to compete. In Experiment 1, three reinforcer
probabilities (.25, .50, and .75) were manipulated in the competition component and were

held constant across many sessions. Choices to


compete were directly affected by the reinforcement probability. The percentage of
choices to compete was highest at the .75
probability and lowest at the .25 probability.
Unexpectedly, we found preference for competing when reinforcement densities between
competition and no-competition options were
comparable (.50 probability). Even at the lowest probability, when it would have been more
profitable to choose the no-competition option,
choices to compete remained at substantial levels for 2 of 3 subjects. In Experiment 2, we
examined how gradual within-session declines in the probability of reinforcer delivery in the competition option affected
choices to compete or not to compete. The reinforcement probability remained at .70 for
several sessions and then decreased gradually
from .70 to .10 within a periodic probe session.
During baseline conditions at the .70 probability level, the percentage of time spent in the
competition option approached 100%. When
a probe session was introduced, the subjects'
percentage of time spent competing decreased,
indicating a sensitivity to the changes in "winnings." The point at which the switches were
made from competition to no-competition
within probe sessions, however, was always
long after the probability levels favored the no-competition alternative. Together, Experiments 1 and 2 indicate that reinforcer probabilities can systematically influence choices to
compete. When the average reinforcement
densities in the competition and no-competition alternatives were equally profitable, however, preference for competing was always observed. Substantial preference for the
competition option was observed even when
monetary payoffs favored the no-competition
option.
In previous studies involving similar alternatives, either competition or working alone,
subjects have been found to prefer to work
alone for reinforcers (Hake, Vukelich, & Olvera, 1975; Olvera & Hake, 1976; Schmitt,
1976). These findings were attributed to a phenomenon characterized as an "escape from, or
avoidance of, competition" (Schmitt, 1976, p.
232). Normally, preference for competing
sharply declines after a brief exposure to similar choice schedules without any other other
experimental manipulations (Olvera & Hake,
1976). In marked contrast to these findings,

CHOICES TO COMPETE
most of our subjects not only chose to compete
most of the time but also continued to compete
for many sessions.
These observed differences may be due to procedural details. In Olvera and Hake's (1976) study, session-to-session fluctuations in the number of trials won

could be considerable, because they depended


in part on the behavior of the other individual
in the pair. In our study these fluctuations
were kept to a minimum, because reinforcers
were delivered according to a predetermined
probability. The preference for competing we
observed may be due in part to the fictitious
subject providing little variation between
"winning" and "losing." Another difference
was that Olvera and Hake (1976) used a ratio
schedule with an added limited-hold contingency placed on the reinforcer. This limited-hold contingency required higher rates of responding than would normally be required
without the limited hold (Buskist & Morgan,
1987; Olvera & Hake, 1976). In turn, this
added contingency may have made the competing alternative more effortful, and more effortful responding (e.g., increased response requirements, different response topographies)
has been shown to suppress preference for
competing (Olvera & Hake, 1976). This interpretation is consistent with other findings
showing that schedules requiring an increased
effort have aversive properties in competitive
choice paradigms with humans (Schmitt, 1976)
and in simple reinforcement schedules with
nonhumans (Appel, 1963; Azrin, 1961;
Thompson, 1964, 1965). Increased response requirements and more effortful responding, however, may not have been a determining variable in our study, because FI schedules were used without a limited-hold contingency placed on the reinforcer. As a result, the FI schedules in our competition component may not have been much more effortful
than the FI schedules in our no-competition
component. On the other hand, although most of our subjects' response rates under our schedule were lower than those required by Schmitt's ratio schedules, more effort was consistently observed in
the competition component, because subjects
emitted higher rates while competing than not
competing. Our study also differs in the degree
of instructional control. Explicit instructions about the response contingencies have normally been provided by other researchers (Olvera & Hake, 1976), but we provided no
such instructions. Although these variables may
account for the differences in our subjects'
preference to compete, it is clear that under
some circumstances competing may be preferred.
In addition to the differences described above,
other particulars of the competition/no-competition schedule and the procedures used may
have influenced subjects' preference for either
alternative. Choice may have been influenced
by the different actions required to enter either
alternative. To enter the competition component, the subject had to emit one response; to
enter the no-competition component, the subject had to refrain from responding for a 10-s
period. In Experiment 2 the sustained preference for the competition component despite
large decreases in reinforcer probability was no doubt
influenced by the procedures used. Subjects in
this experiment were exposed to a favorable
probability level (.70) for a minimum of 15
sessions before the probability of "winning"
was reduced within a probe session. This extended exposure to the .70 probability probably contributed to the preference for the competition component during subsequent probe
sessions. Another limitation in Experiment 2
was that only a descending probability sequence was used during the probe sessions.
We used this sequence because, in an earlier
pilot study, we found that subjects were not
sensitive to even dramatic increases from low- to high-probability conditions because they
spent most of their time in the no-competition
component at the low-probability conditions.
(This is similar to the problem encountered in
Experiment 3, in which a button had to be
removed from 2 subjects' response panels to
force exposure to changes in contingencies.)
Future research in this area may be needed to
determine which variables (e.g., the instructions, the role of another subject to compete
against, or the schedule) are responsible for
the observed preferences for competing.
The Role of Social Context in
Preference for Competing
In Experiment 3 an alternative explanation
for the apparent preference for earning reinforcers in the competition component was explored: Preference may be produced by the
instructions about the social context. We stud-

146

DONALD M. DOUGHERTY and DON R. CHEREK

ied 3 additional subjects on a comparable nonsocial schedule that matched our competition/
no-competition schedule in terms of the reinforcement contingencies but differed in terms
of the instructions and stimuli provided. On
this schedule, all discriminative stimuli and
instructions concerning competing and not
competing were omitted. The only stimuli that
appeared on the monitor were large letters,
and the instructions were limited to telling the
subject that he could earn money in either of
two alternatives. Under conditions in which
the alternatives were equally profitable, subjects preferred the former no-competition component.
The dramatically different results obtained
from the social and nonsocial versions of the
competition/no-competition schedule suggest
a powerful effect of the instructions relating
to the social context on choice and response
characteristics. In the social version of this procedure, manipulations of reinforcer probability and reinforcer magnitude alone cannot fully
account for the results observed. Using this
version of the procedure, subjects preferred the
competition option even when it was disadvantageous monetarily in the long run: High
levels of competing were maintained at low
reinforcement probabilities. Alternatively, in
the nonsocial version of this schedule, subjects
preferred the schedule equivalent to the former no-competition schedule. This difference illustrates the impact of the social context (or instructions) provided by the experimenters.
As an aside, it is worth noting that the results obtained from Experiments 1 and 2, in
which social instructions were provided, have
much in common with the research examining
preference between variable and fixed schedules of reinforcement. Studies in this area have
demonstrated that organisms show preference
for variable schedules over fixed schedules of
reinforcement, given equal average rates of reinforcement (e.g., Herrnstein, 1964; Killeen,
1968; Trevett, Davison, & Williams, 1972).
The results from Experiment 3 were not consistent with this generalization.
The social context provided by the experimenters has been found to be crucially important in determining patterns of responding
and even in modulating the effects of drugs on
behavior in previous experiments. For example, using an operant paradigm to measure

aggressive behavior, some researchers have


found that the characteristics of responding are
different when instructions are given that provide a social context (interactions with another
individual) than when a nonsocial context is
provided (interaction with a computer) (Cherek
et al., 1990, 1991; Kelly & Cherek, 1993). In
addition, under the social context, aggressive
responding changed as a function of drug dose,
but this change did not occur under the nonsocial context.
In a broader context, many researchers have
shown that instructions can decrease a subject's
sensitivity to the programmed contingencies
(Galizio, 1979; Harzem, Lowe, & Bagshaw,
1978; Kaufman, Baron, & Kopp, 1966; Lowe,
1979; Matthews, Shimoff, Catania, & Sagvolden, 1977; Skinner, 1966, p. 247; Weiner,
1970a, 1970b). In the present series of experiments the same type of insensitivity to the
monetary contingencies was observed, even after weeks of exposure to the contingencies. The
social-context instructions were sufficient to
produce preferences for competing and often
high rates of responding that were resistant to
changes in reinforcer probability, even though
no instructions were provided about the response requirements in these experiments.
Characteristics of
Competitive Responding
The competitive responding generated and
maintained by our competition/no-competition schedule is similar to the responding generated by the competitive FI schedule, in which
no alternative response options are available
(Buskist et al., 1984; Buskist & Morgan, 1987).
For example, in both the Buskist and Morgan
and the Buskist et al. experiments, moderate
to high rates of responding were controlled by
competitive FI schedules, and low rates of responding were controlled by standard FI
schedules (similar to our no-competition option). Moderate to high rates of responding
were also controlled by our competition schedule, and lower rates of responding were controlled by our no-competition schedule. This
similarity in competitive responding was
somewhat surprising, considering the differences between the previous competition paradigms and our competition paradigm. One of
the most important differences is that in the
previous experiments the subject was compet-

147

CHOICES TO COMPETE
ing against another individual, but in our experiments the subject was competing against
a fictitious subject.
Besides investigating the effects of reinforcement magnitude and probability on choices to
compete in the first two experiments, we were
also interested in the effects of these two variables on rates of responding. We found that
response rates within subjects were always
higher in the competition component than in
the no-competition component. Between subjects, however, response rates varied considerably. Some subjects responded at high rates,
emitting more than 300 responses per minute,
and some subjects responded at lower rates,
emitting fewer than 25 responses per minute.
The effects of the various manipulations of
reinforcer probability and magnitude on rates
of responding can be summarized as follows:
(a) The reinforcer-probability manipulations
in Experiment 1 had no consistent effect on
rates of responding, and (b) the reinforcermagnitude manipulations in Experiment 2 had
no consistent effect on rates of responding. In
Experiment 3, using the nonsocial version of
the competition/no-competition schedule, no
orderly relationships were found between these
variables and rates of responding.

Conclusions and Some Possible Directions for Future Research
Performance under the competition/no-competition schedule was sensitive to reinforcer-probability manipulations, and stable
baselines were produced. This schedule may
prove to be useful to others interested in competitive behavior and choices to engage in competition. Here we list some of the advantages
of this schedule: (a) It produces stable baselines
(rates of responding and choices to compete);
(b) it is sensitive to changes in reinforcement
probability; (c) probabilities of "winning" can
be precisely controlled by the experimenter;
(d) it requires only 1 subject; and (e) it maintains significant levels of competition that are
amenable to study. A few possible directions
for future research in this area may include
investigations into history effects (of either
"winning" or "losing"), social manipulations,
instructions, and the effects of different drug
classes.

REFERENCES
Appel, J. B. (1963). Aversive aspects of a schedule of
positive reinforcement. Journal of the Experimental
Analysis of Behavior, 6, 423-428.
Azrin, N. H. (1961). Time-out from positive reinforcement. Science, 133, 382-383.
Buskist, W. F., Barry, A., Morgan, D., & Rossi, M.
(1984). Competitive fixed interval performance in humans: Role of "orienting" instructions. The Psychological Record, 34, 241-257.
Buskist, W., & Morgan, D. (1987). Competitive fixed-interval performance in humans. Journal of the Experimental Analysis of Behavior, 47, 145-158.
Cherek, D. R., & Dougherty, D. M. (in press). Motivational effects of marijuana: Humans' choices to work
or not work for reinforcers. Proceedings of the Committee
on Problems of Drug Dependence, NIDA Research Monograph-Problems of Drug Dependence 1993. Washington, DC: U.S. Government Printing Office.
Cherek, D. R., Spiga, R., Roache, J. D., & Cowan, K.
A. (1991). Effects of triazolam on human aggressive,
escape and point-maintained responding. Pharmacology
Biochemistry and Behavior, 40, 835-839.
Cherek, D. R., Steinberg, J. L., Kelly, T. H., Robinson,
D. E., & Spiga, R. (1990). Effects of acute diazepam
and d-amphetamine administration on aggressive and
escape responding of normal male subjects. Psychopharmacology, 100, 173-181.
Galizio, M. (1979). Contingency-shaped and rule-governed behavior: Instructional control of human loss
avoidance. Journal of the Experimental Analysis of Behavior, 31, 450-459.
Hake, D. F., Olvera, D., & Bell, J. C. (1975). Switching
from competition to sharing or cooperation at large
response requirements: Competition requires more responding. Journal of the Experimental Analysis of Behavior, 24, 343-354.
Hake, D., Vukelich, R., & Olvera, D. (1975). The measurement of sharing and cooperation as equity effects and some relationships between them. Journal of the Experimental Analysis of Behavior, 23, 63-79.
Harzem, P., Lowe, C. F., & Bagshaw, M. (1978). Verbal control in human operant behavior. The Psychological Record, 28, 405-423.
Herrnstein, R. J. (1964). Aperiodicity as a factor in
choice. Journal of the Experimental Analysis of Behavior,
7, 179-184.
Kaufman, A., Baron, A., & Kopp, R. E. (1966). Some
effects of instruction on human operant behavior. Psychonomic Monograph Supplements, 1, 243-250.
Kelly, T. H., & Cherek, D. R. (1993). The effects of
alcohol on free-operant aggressive behavior. Journal of
Studies on Alcohol, 11, 40-52.
Killeen, P. (1968). On the measurement of reinforcement frequency in the study of preference. Journal of
the Experimental Analysis of Behavior, 11, 263-269.
Lowe, C. F. (1979). Determinants of human operant
behaviour. In M. D. Zeiler & P. Harzem (Eds.), Advances in analysis of behaviour: Vol. 1. Reinforcement and
the organization of behaviour (pp. 159-192). Chichester,
England: Wiley.
Matthews, B. A., Shimoff, E., Catania, A. C., & Sagvolden, T. (1977). Uninstructed human responding:


Sensitivity to ratio and interval contingencies. Journal


of the Experimental Analysis of Behavior, 27, 453-467.
Olvera, D. R., & Hake, D. F. (1976). Producing a
change from competition to sharing: Effects of large
and adjusting response requirements. Journal of the Experimental Analysis of Behavior, 26, 321-333.
Schmitt, D. R. (1976). Some conditions affecting the
choice to cooperate or compete. Journal of the Experimental Analysis of Behavior, 25, 165-178.
Schmitt, D. R. (1984). Interpersonal relations: Cooperation and competition. Journal of the Experimental
Analysis of Behavior, 42, 377-383.
Schmitt, D. R. (1987). Interpersonal contingencies: Performance differences and cost-effectiveness. Journal of
the Experimental Analysis of Behavior, 48, 221-234.
Skinner, B. F. (1966). An operant analysis of problem
solving, Notes 6.1-6.4. In B. F. Skinner, Contingencies
of reinforcement: A theoretical analysis (pp. 157-171).
New York: Appleton-Century-Crofts.
Steigleder, M. K., Weiss, R. F., Balling, S. S., Wenninger,
V. L., & Lombardo, J. P. (1980). Drivelike motivational properties of competitive behavior. Journal of
Personality and Social Psychology, 38, 93-104.
Steigleder, M. K., Weiss, R. F., Cramer, R. E., & Feinberg, R. A. (1978). Motivating and reinforcing functions of competitive behavior. Journal of Personality and
Social Psychology, 36, 1291-1301.

Thompson, D. M. (1964). Escape from SD associated


with fixed-ratio reinforcement. Journal of the Experimental Analysis of Behavior, 7, 1-8.
Thompson, D. M. (1965). Time-out from fixed-ratio
reinforcement: A systematic replication. Psychonomic
Science, 2, 109-110.
Trevett, A. J., Davison, M. C., & Williams, R. J. (1972).
Performance in concurrent interval schedules. Journal
of the Experimental Analysis of Behavior, 17, 369-374.
Weiner, H. (1964). Conditioning history and human
fixed-interval performance. Journal of the Experimental
Analysis of Behavior, 7, 383-385.
Weiner, H. (1969). Controlling human fixed-interval
performance. Journal of the Experimental Analysis of
Behavior, 12, 349-373.
Weiner, H. (1970a). Human behavioral persistence. The
Psychological Record, 20, 445-456.
Weiner, H. (1970b). Instructional control of human operant responding during extinction following fixed-ratio conditioning. Journal of the Experimental Analysis of
Behavior, 13, 391-394.

Received April 26, 1993


Final acceptance March 14, 1994
