1 Introduction

In which we try to explain why we consider artificial intelligence to be a subject most worthy of study, and in which we try to decide what exactly it is, this being a good thing to decide before embarking.

We call ourselves Homo sapiens - man the wise - because our mental capacities are so important to us. For thousands of years, we have tried to understand how we think; that is, how a mere handful of stuff can perceive, understand, predict, and manipulate a world far larger and more complicated than itself. The field of artificial intelligence, or AI, goes further still: it attempts not just to understand but also to build intelligent entities.
AI is one of the newest sciences. Work started in earnest soon after World War II, and the name itself was coined in 1956. Along with molecular biology, AI is regularly cited as the "field I would most like to be in" by scientists in other disciplines. A student in physics might reasonably feel that all the good ideas have already been taken by Galileo, Newton, Einstein, and the rest. AI, on the other hand, still has openings for several full-time Einsteins.
AI currently encompasses a huge variety of subfields, ranging from general-purpose areas, such as learning and perception, to such specific tasks as playing chess, proving mathematical theorems, writing poetry, and diagnosing diseases. AI systematizes and automates intellectual tasks and is therefore potentially relevant to any sphere of human intellectual activity. In this sense, it is truly a universal field.

1.1 What is AI?

We have claimed that AI is exciting, but we have not said what it is. Definitions of artificial intelligence according to eight textbooks are shown in Figure 1.1. These definitions vary along two main dimensions. Roughly, the ones on top are concerned with thought processes and reasoning, whereas the ones on the bottom address behavior. The definitions on the left measure success in terms of fidelity to human performance, whereas the ones on the right measure against an ideal concept of intelligence, which we will call rationality. A system is rational if it does the "right thing," given what it knows.
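One way to make "doing the right thing, given what it knows" concrete is to act so as to achieve the best expected outcome, an idea developed later in the chapter. The sketch below is our own illustration, not code or notation from the text; all action names, outcomes, probabilities, and utilities are invented for the example.

```python
# Illustrative sketch: a "rational" choice as the action with the highest
# expected utility under the agent's current beliefs. Everything named here
# is a made-up example, not part of the book's formal apparatus.

def expected_utility(action, beliefs, utility):
    """Average the utility of each possible outcome of `action`,
    weighted by the probability the agent assigns to that outcome."""
    return sum(p * utility[outcome] for outcome, p in beliefs[action].items())

def rational_choice(actions, beliefs, utility):
    """Pick the action that maximizes expected utility."""
    return max(actions, key=lambda a: expected_utility(a, beliefs, utility))

# Toy example: carry an umbrella or not, believing rain is 40% likely.
beliefs = {
    "take_umbrella": {"dry_burdened": 1.0},               # always dry, mildly burdened
    "leave_umbrella": {"dry_free": 0.6, "soaked": 0.4},   # gamble on the weather
}
utility = {"dry_burdened": 7, "dry_free": 10, "soaked": 0}

print(rational_choice(["take_umbrella", "leave_umbrella"], beliefs, utility))
# -> take_umbrella   (expected utility 7 versus 6)
```

The point of the sketch is only that "right thing" is judged relative to what the agent knows: with different beliefs about rain, the same utilities would license the opposite choice.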
Systems that think like humans
- "The exciting new effort to make computers think ... machines with minds, in the full and literal sense." (Haugeland, 1985)
- "[The automation of] activities that we associate with human thinking, activities such as decision-making, problem solving, learning ..." (Bellman, 1978)

Systems that think rationally
- "The study of mental faculties through the use of computational models." (Charniak and McDermott, 1985)
- "The study of the computations that make it possible to perceive, reason, and act." (Winston, 1992)

Systems that act like humans
- "The art of creating machines that perform functions that require intelligence when performed by people." (Kurzweil, 1990)
- "The study of how to make computers do things at which, at the moment, people are better." (Rich and Knight, 1991)

Systems that act rationally
- "Computational Intelligence is the study of the design of intelligent agents." (Poole et al., 1998)
- "AI ... is concerned with intelligent behavior in artifacts." (Nilsson, 1998)

Figure 1.1 Some definitions of artificial intelligence, organized into four categories.

Historically, all four approaches to AI have been followed. As one might expect, a tension exists between approaches centered around humans and approaches centered around rationality.1 A human-centered approach must be an empirical science, involving hypothesis and experimental confirmation. A rationalist approach involves a combination of mathematics and engineering. Each group has both disparaged and helped the other. Let us look at the four approaches in more detail.

1 We should point out that, by distinguishing between human and rational behavior, we are not suggesting that humans are necessarily "irrational" in the sense of "emotionally unstable" or "insane." One merely need note that we are not perfect: we are not all chess grandmasters, even those of us who know all the rules of chess; and, unfortunately, not everyone gets an A on the exam. Some systematic errors in human reasoning are cataloged by Kahneman et al. (1982).

Acting humanly: The Turing Test approach

The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory operational definition of intelligence. Rather than proposing a long and perhaps controversial list of qualifications required for intelligence, he suggested a test based on indistinguishability from undeniably intelligent entities - human beings. The computer passes the test if a human interrogator, after posing some written questions, cannot tell whether the written responses come from a person or not. Chapter 26 discusses the details of the test and whether a computer is really intelligent if it passes. For now, we note that programming a computer to pass the test provides plenty to work on. The computer would need to possess the following capabilities:

- natural language processing to enable it to communicate successfully in English;
- knowledge representation to store what it knows or hears;
- automated reasoning to use the stored information to answer questions and to draw new conclusions;
- machine learning to adapt to new circumstances and to detect and extrapolate patterns.

Turing's test deliberately avoided direct physical interaction between the interrogator and the computer, because physical simulation of a person is unnecessary for intelligence. However, the so-called total Turing Test includes a video signal so that the interrogator can test the subject's perceptual abilities, as well as the opportunity for the interrogator to pass physical objects "through the hatch." To pass the total Turing Test, the computer will need

- computer vision to perceive objects, and
- robotics to manipulate objects and move about.

These six disciplines compose most of AI, and Turing deserves credit for designing a test that remains relevant 50 years later. Yet AI researchers have devoted little effort to passing the Turing test, believing that it is more important to study the underlying principles of intelligence than to duplicate an exemplar. The quest for "artificial flight" succeeded when the Wright brothers and others stopped imitating birds and learned about aerodynamics. Aeronautical engineering texts do not define the goal of their field as making "machines that fly so exactly like pigeons that they can fool even other pigeons."

Thinking humanly: The cognitive modeling approach

If we are going to say that a given program thinks like a human, we must have some way of determining how humans think. We need to get inside the actual workings of human minds. There are two ways to do this: through introspection - trying to catch our own thoughts as they go by - and through psychological experiments. Once we have a sufficiently precise theory of the mind, it becomes possible to express the theory as a computer program. If the program's input/output and timing behaviors match corresponding human behaviors, that is evidence that some of the program's mechanisms could also be operating in humans. For example, Allen Newell and Herbert Simon, who developed GPS, the "General Problem Solver" (Newell and Simon, 1961), were not content to have their program solve problems correctly. They were more concerned with comparing the trace of its reasoning steps to traces of human subjects solving the same problems. The interdisciplinary field of cognitive science brings together computer models from AI and experimental techniques from psychology to try to construct precise and testable theories of the workings of the human mind.

Cognitive science is a fascinating field, worthy of an encyclopedia in itself (Wilson and Keil, 1999). We will not attempt to describe what is known of human cognition in this book. We will occasionally comment on similarities or differences between AI techniques and human cognition. Real cognitive science, however, is necessarily based on experimental investigation of actual humans or animals, and we assume that the reader has access only to a computer for experimentation.

In the early days of AI there was often confusion between the approaches: an author would argue that an algorithm performs well on a task and that it is therefore a good model
of human performance, or vice versa. Modern authors separate the two kinds of claims; this distinction has allowed both AI and cognitive science to develop more rapidly. The two fields continue to fertilize each other, especially in the areas of vision and natural language. Vision in particular has recently made advances via an integrated approach that considers neurophysiological evidence and computational models.

Thinking rationally: The "laws of thought" approach

The Greek philosopher Aristotle was one of the first to attempt to codify "right thinking," that is, irrefutable reasoning processes. His syllogisms provided patterns for argument structures that always yielded correct conclusions when given correct premises - for example, "Socrates is a man; all men are mortal; therefore, Socrates is mortal." These laws of thought were supposed to govern the operation of the mind; their study initiated the field called logic.

Logicians in the 19th century developed a precise notation for statements about all kinds of things in the world and about the relations among them. (Contrast this with ordinary arithmetic notation, which provides mainly for equality and inequality statements about numbers.) By 1965, programs existed that could, in principle, solve any solvable problem described in logical notation.2 The so-called logicist tradition within artificial intelligence hopes to build on such programs to create intelligent systems.

There are two main obstacles to this approach. First, it is not easy to take informal knowledge and state it in the formal terms required by logical notation, particularly when the knowledge is less than 100% certain. Second, there is a big difference between being able to solve a problem "in principle" and doing so in practice. Even problems with just a few facts can exhaust the computational resources of any computer unless it has some guidance as to which reasoning steps to try first. Although both of these obstacles apply to any attempt to build computational reasoning systems, they appeared first in the logicist tradition.

2 If there is no solution, the program might never stop looking for one.

Acting rationally: The rational agent approach

An agent is just something that acts (agent comes from the Latin agere, to do). But computer agents are expected to have other attributes that distinguish them from mere "programs," such as operating under autonomous control, perceiving their environment, persisting over a prolonged time period, adapting to change, and being capable of taking on another's goals. A rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome.

In the "laws of thought" approach to AI, the emphasis was on correct inferences. Making correct inferences is sometimes part of being a rational agent, because one way to act rationally is to reason logically to the conclusion that a given action will achieve one's goals and then to act on that conclusion. On the other hand, correct inference is not all of rationality, because there are often situations where there is no provably correct thing to do, yet something must still be done. There are also ways of acting rationally that cannot be said to involve inference. For example, recoiling from a hot stove is a reflex action that is usually more successful than a slower action taken after careful deliberation.

All the skills needed for the Turing Test are there to allow rational actions. Thus, we need the ability to represent knowledge and reason with it because this enables us to reach good decisions in a wide variety of situations. We need to be able to generate comprehensible sentences in natural language because saying those sentences helps us get by in a complex society. We need learning not just for erudition, but because having a better idea of how the world works enables us to generate more effective strategies for dealing with it. We need visual perception not just because seeing is fun, but to get a better idea of what an action might achieve - for example, being able to see a tasty morsel helps one to move toward it.

For these reasons, the study of AI as rational-agent design has at least two advantages. First, it is more general than the "laws of thought" approach, because correct inference is just one of several possible mechanisms for achieving rationality. Second, it is more amenable to scientific development than are approaches based on human behavior or human thought, because the standard of rationality is clearly defined and completely general. Human behavior, on the other hand, is well adapted for one specific environment and is the product, in part, of a complicated and largely unknown evolutionary process that still is far from producing perfection. This book will therefore concentrate on general principles of rational agents and on components for constructing them. We will see that despite the apparent simplicity with which the problem can be stated, an enormous variety of issues come up when we try to solve it. Chapter 2 outlines some of these issues in more detail.

One important point to keep in mind: We will see before too long that achieving perfect rationality - always doing the right thing - is not feasible in complicated environments. The computational demands are just too high. For most of the book, however, we will adopt the working hypothesis that perfect rationality is a good starting point for analysis. It simplifies the problem and provides the appropriate setting for most of the foundational material in the field. Chapters 6 and 17 deal explicitly with the issue of limited rationality - acting appropriately when there is not enough time to do all the computations one might like.

1.2 The Foundations of Artificial Intelligence

In this section, we provide a brief history of the disciplines that contributed ideas, viewpoints, and techniques to AI. Like any history, this one is forced to concentrate on a small number of people, events, and ideas and to ignore others that also were important. We organize the history around a series of questions. We certainly would not wish to give the impression that these questions are the only ones the disciplines address or that the disciplines have all been working toward AI as their ultimate fruition.

Philosophy (428 B.C.-present)

- Can formal rules be used to draw valid conclusions?
- How does the mental mind arise from a physical brain?
- Where does knowledge come from?
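The first question - whether formal rules can be used to draw valid conclusions - can be given a minimal computational sketch in the spirit of the syllogism example from the previous section ("Socrates is a man; all men are mortal; therefore, Socrates is mortal"). This is our own illustration, not a program from the text; representing facts as (predicate, individual) pairs and rules as "all X are Y" pairs is an assumption made for brevity.

```python
# A tiny forward-chaining sketch (our illustration): mechanically derive
# conclusions from premises using rules of the form "all X are Y".

def forward_chain(facts, rules):
    """Repeatedly apply rules (premise-predicate -> conclusion-predicate)
    to the known facts until no new facts appear; return all derived facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, individual in list(facts):
                if pred == premise and (conclusion, individual) not in facts:
                    facts.add((conclusion, individual))
                    changed = True
    return facts

facts = {("man", "Socrates")}          # Socrates is a man
rules = [("man", "mortal")]            # all men are mortal
print(("mortal", "Socrates") in forward_chain(facts, rules))  # -> True
```

The mechanical character of the procedure is the point: given correct premises, the conclusions follow without any appeal to meaning, which is exactly what the "laws of thought" tradition sought to capture.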
Aristotle (384-322 B.C.) was the first to formulate a precise set of laws governing the rational part of the mind. He developed an informal system of syllogisms for proper reasoning, which in principle allowed one to generate conclusions mechanically, given initial premises. Much later, Ramon Lull (d. 1315) had the idea that useful reasoning could actually be carried out by a mechanical artifact. His "concept wheels" are on the cover of this book. Thomas Hobbes (1588-1679) proposed that reasoning was like numerical computation, that "we add and subtract in our silent thoughts." The automation of computation itself was already under way; around 1500, Leonardo da Vinci (1452-1519) designed but did not build a mechanical calculator; recent reconstructions have shown the design to be functional. The first known calculating machine was constructed around 1623 by the German scientist Wilhelm Schickard (1592-1635), although the Pascaline, built in 1642 by Blaise Pascal (1623-1662), is more famous. Pascal wrote that "the arithmetical machine produces effects which appear nearer to thought than all the actions of animals." Gottfried Wilhelm Leibniz (1646-1716) built a mechanical device intended to carry out operations on concepts rather than numbers, but its scope was rather limited.

Now that we have the idea of a set of rules that can describe the formal, rational part of the mind, the next step is to consider the mind as a physical system. René Descartes (1596-1650) gave the first clear discussion of the distinction between mind and matter and of the problems that arise. One problem with a purely physical conception of the mind is that it seems to leave little room for free will: if the mind is governed entirely by physical laws, it has no more free will than a rock "deciding" to fall toward the center of the earth. Although a strong advocate of the power of reasoning, Descartes was also a proponent of dualism. He held that there is a part of the human mind (or soul or spirit) that is outside of nature, exempt from physical laws. Animals, on the other hand, did not possess this dual quality; they could be treated as machines. An alternative to dualism is materialism, which holds that the brain's operation according to the laws of physics constitutes the mind. Free will is simply the way that the perception of available choices appears to the choice process.

Given a physical mind that manipulates knowledge, the next problem is to establish the source of knowledge. The empiricism movement, starting with Francis Bacon's (1561-1626) Novum Organum,3 is characterized by a dictum of John Locke (1632-1704): "Nothing is in the understanding, which was not first in the senses." David Hume's (1711-1776) A Treatise of Human Nature (Hume, 1739) proposed what is now known as the principle of induction: that general rules are acquired by exposure to repeated associations between their elements. Building on the work of Ludwig Wittgenstein (1889-1951) and Bertrand Russell (1872-1970), the famous Vienna Circle, led by Rudolf Carnap (1891-1970), developed the doctrine of logical positivism. This doctrine holds that all knowledge can be characterized by logical theories connected, ultimately, to observation sentences that correspond to sensory inputs.4 The confirmation theory of Carnap and Carl Hempel (1905-1997) attempted to understand how knowledge can be acquired from experience. Carnap's book The Logical Structure of the World (1928) defined an explicit computational procedure for extracting knowledge from elementary experiences. It was probably the first theory of mind as a computational process.

3 An update of Aristotle's Organon, or instrument of thought.
4 In this picture, all meaningful statements can be verified or falsified either by analyzing the meaning of the words or by carrying out experiments. Because this rules out most of metaphysics, as was the intention, logical positivism was unpopular in some circles.

The final element in the philosophical picture of the mind is the connection between knowledge and action. This question is vital to AI, because intelligence requires action as well as reasoning. Moreover, only by understanding how actions are justified can we understand how to build an agent whose actions are justifiable (or rational). Aristotle argued that actions are justified by a logical connection between goals and knowledge of the action's outcome (the last part of this extract also appears on the front cover of this book):

    But how does it happen that thinking is sometimes accompanied by action and sometimes not, sometimes by motion, and sometimes not? It looks as if almost the same thing happens as in the case of reasoning and making inferences about unchanging objects. But in that case the end is a speculative proposition ... whereas here the conclusion which results from the two premises is an action. ... I need covering; a cloak is a covering. I need a cloak. What I need, I have to make; I need a cloak. I have to make a cloak. And the conclusion, the "I have to make a cloak," is an action. (Nussbaum, 1978, p. 40)

In the Nicomachean Ethics (Book III. 3, 1112b), Aristotle further elaborates on this topic, suggesting an algorithm:

    We deliberate not about ends, but about means. For a doctor does not deliberate whether he shall heal, nor an orator whether he shall persuade, ... They assume the end and consider how and by what means it is attained, and if it seems easily and best produced thereby; while if it is achieved by one means only they consider how it will be achieved by this and by what means this will be achieved, till they come to the first cause, ... and what is last in the order of analysis seems to be first in the order of becoming. And if we come on an impossibility, we give up the search, e.g., if we need money and this cannot be got; but if a thing appears possible we try to do it.

Aristotle's algorithm was implemented 2300 years later by Newell and Simon in their GPS program. We would now call it a regression planning system. (See Chapter 11.)

Goal-based analysis is useful, but does not say what to do when several actions will achieve the goal, or when no action will achieve it completely. Antoine Arnauld (1612-1694) correctly described a quantitative formula for deciding what action to take in cases like this (see Chapter 16). John Stuart Mill's (1806-1873) book Utilitarianism (Mill, 1863) promoted the idea of rational decision criteria in all spheres of human activity. The more formal theory of decisions is discussed in the following section.

Mathematics (c. 800-present)

- What are the formal rules to draw valid conclusions?
- What can be computed?
- How do we reason with uncertain information?

Philosophers staked out most of the important ideas of AI, but the leap to a formal science required a level of mathematical formalization in three fundamental areas: logic, computation, and probability.

The idea of formal logic can be traced back to the philosophers of ancient Greece (see
Chapter 7), but its mathematical development really began with the work of George Boole (1815-1864), who worked out the details of propositional, or Boolean, logic (Boole, 1847). In 1879, Gottlob Frege (1848-1925) extended Boole's logic to include objects and relations, creating the first-order logic that is used today as the most basic knowledge representation system.5 Alfred Tarski (1902-1983) introduced a theory of reference that shows how to relate the objects in a logic to objects in the real world. The next step was to determine the limits of what could be done with logic and computation.

The first nontrivial algorithm is thought to be Euclid's algorithm for computing greatest common divisors. The study of algorithms as objects in themselves goes back to al-Khowarazmi, a Persian mathematician of the 9th century, whose writings also introduced Arabic numerals and algebra to Europe. Boole and others discussed algorithms for logical deduction, and, by the late 19th century, efforts were under way to formalize general mathematical reasoning as logical deduction. In 1900, David Hilbert (1862-1943) presented a list of 23 problems that he correctly predicted would occupy mathematicians for the bulk of the century. The final problem asks whether there is an algorithm for deciding the truth of any logical proposition involving the natural numbers - the famous Entscheidungsproblem, or decision problem. Essentially, Hilbert was asking whether there were fundamental limits to the power of effective proof procedures. In 1930, Kurt Gödel (1906-1978) showed that there exists an effective procedure to prove any true statement in the first-order logic of Frege and Russell, but that first-order logic could not capture the principle of mathematical induction needed to characterize the natural numbers. In 1931, he showed that real limits do exist. His incompleteness theorem showed that in any language expressive enough to describe the properties of the natural numbers, there are true statements that are undecidable in the sense that their truth cannot be established by any algorithm.

This fundamental result can also be interpreted as showing that there are some functions on the integers that cannot be represented by an algorithm - that is, they cannot be computed. This motivated Alan Turing (1912-1954) to try to characterize exactly which functions are capable of being computed. This notion is actually slightly problematic, because the notion of a computation or effective procedure really cannot be given a formal definition. However, the Church-Turing thesis, which states that the Turing machine (Turing, 1936) is capable of computing any computable function, is generally accepted as providing a sufficient definition. Turing also showed that there were some functions that no Turing machine can compute. For example, no machine can tell in general whether a given program will return an answer on a given input or run forever.

Although undecidability and noncomputability are important to an understanding of computation, the notion of intractability has had a much greater impact. Roughly speaking, a problem is called intractable if the time required to solve instances of the problem grows exponentially with the size of the instances. The distinction between polynomial and exponential growth in complexity was first emphasized in the mid-1960s (Cobham, 1964; Edmonds, 1965). It is important because exponential growth means that even moderately large instances cannot be solved in any reasonable time. Therefore, one should strive to divide the overall problem of generating intelligent behavior into tractable subproblems rather than intractable ones.

How can one recognize an intractable problem? The theory of NP-completeness, pioneered by Steven Cook (1971) and Richard Karp (1972), provides a method. Cook and Karp showed the existence of large classes of canonical combinatorial search and reasoning problems that are NP-complete. Any problem class to which the class of NP-complete problems can be reduced is likely to be intractable. (Although it has not been proved that NP-complete problems are necessarily intractable, most theoreticians believe it.) These results contrast with the optimism with which the popular press greeted the first computers - "Electronic Super-Brains" that were "Faster than Einstein!" Despite the increasing speed of computers, careful use of resources will characterize intelligent systems. Put crudely, the world is an extremely large problem instance! In recent years, AI has helped explain why some instances of NP-complete problems are hard, yet others are easy (Cheeseman et al., 1991).

Besides logic and computation, the third great contribution of mathematics to AI is the theory of probability. The Italian Gerolamo Cardano (1501-1576) first framed the idea of probability, describing it in terms of the possible outcomes of gambling events. Probability quickly became an invaluable part of all the quantitative sciences, helping to deal with uncertain measurements and incomplete theories. Pierre Fermat (1601-1665), Blaise Pascal (1623-1662), James Bernoulli (1654-1705), Pierre Laplace (1749-1827), and others advanced the theory and introduced new statistical methods. Thomas Bayes (1702-1761) proposed a rule for updating probabilities in the light of new evidence. Bayes' rule and the resulting field called Bayesian analysis form the basis of most modern approaches to uncertain reasoning in AI systems.

5 Frege's proposed notation for first-order logic never became popular, for reasons that are apparent from the example on the front cover.

Economics (1776-present)

- How should we make decisions so as to maximize payoff?
- How should we do this when others may not go along?
- How should we do this when the payoff may be far in the future?

The science of economics got its start in 1776, when Scottish philosopher Adam Smith (1723-1790) published An Inquiry into the Nature and Causes of the Wealth of Nations. While the ancient Greeks and others had made contributions to economic thought, Smith was the first to treat it as a science, using the idea that economies can be thought of as consisting of individual agents maximizing their own economic well-being. Most people think of economics as being about money, but economists will say that they are really studying how people make choices that lead to preferred outcomes. The mathematical treatment of "preferred outcomes" or utility was first formalized by Léon Walras (pronounced "Valrasse") (1834-1910) and was improved by Frank Ramsey (1931) and later by John von Neumann and Oskar Morgenstern in their book The Theory of Games and Economic Behavior (1944).

Decision theory, which combines probability theory with utility theory, provides a formal and complete framework for decisions (economic or otherwise) made under uncertainty - that is, in cases where probabilistic descriptions appropriately capture the decision-maker's environment. This is suitable for "large" economies where each agent need pay no attention
ry ll
l'2" The Foundations of Aftifilul In'dlg"l""
l0 Chapter l. Introductis, Section

to the actions of other agents as individuals. For'"small" economies, the situation is mucfi
more like a game: the actions of one player can significantly affect the utility of anothq
(either positively or negatively). Von Neumann and Morgenstern's development of gary
GAME THEOFY theory (see also Luce and Raiffa, 1957) included the surprising result that, for some gameE Axon from another cell

a rational agent should act in a random fashion, or at least in a way that appears random to
the adversaries.

For the most part, economists did not address the third question listed above, namely, how to make rational decisions when payoffs from actions are not immediate but instead result from several actions taken in sequence. This topic was pursued in the field of operations research, which emerged in World War II from efforts in Britain to optimize radar installations, and later found civilian applications in complex management decisions. The work of Richard Bellman (1957) formalized a class of sequential decision problems called Markov decision processes, which we study in Chapters 17 and 21.

Work in economics and operations research has contributed much to our notion of rational agents, yet for many years AI research developed along entirely separate paths. One reason was the apparent complexity of making rational decisions. Herbert Simon (1916-2001), the pioneering AI researcher, won the Nobel prize in economics in 1978 for his early work showing that models based on satisficing (making decisions that are "good enough," rather than laboriously calculating an optimal decision) gave a better description of actual human behavior (Simon, 1947). In the 1990s, there has been a resurgence of interest in decision-theoretic techniques for agent systems (Wellman, 1995).

Neuroscience (1861-present)

o How do brains process information?

[Figure 1.2: The parts of a nerve cell or neuron. Each neuron consists of a cell body, or soma, that contains a cell nucleus. Branching out from the cell body are a number of fibers called dendrites and a single long fiber called the axon. The axon stretches out for a long distance, much longer than the scale in the diagram indicates. Typically axons are 1 cm long (100 times the diameter of the cell body), but can reach up to 1 meter. A neuron makes connections with 10 to 100,000 other neurons at junctions called synapses. Signals are propagated from neuron to neuron by a complicated electrochemical reaction. The signals control brain activity in the short term and also enable long-term changes in the position and connectivity of neurons. These mechanisms are thought to form the basis for learning in the brain. Most information processing goes on in the cerebral cortex, the outer layer of the brain. The basic organizational unit appears to be a column of tissue about 0.5 mm in diameter, extending the full depth of the cortex, which is about 4 mm in humans. A column contains about 20,000 neurons.]
Neuroscience is the study of the nervous system, particularly the brain. The exact way in which the brain enables thought is one of the great mysteries of science. It has been appreciated for thousands of years that the brain is somehow involved in thought, because of the evidence that strong blows to the head can lead to mental incapacitation. It has also long been known that human brains are somehow different; in about 335 B.C. Aristotle wrote, "Of all the animals, man has the largest brain in proportion to his size." Still, it was not until the middle of the 18th century that the brain was widely recognized as the seat of consciousness. Before then, candidate locations included the heart, the spleen, and the pineal gland.

Paul Broca's (1824-1880) study of aphasia (speech deficit) in brain-damaged patients in 1861 reinvigorated the field and persuaded the medical establishment of the existence of localized areas of the brain responsible for specific cognitive functions. In particular, he showed that speech production was localized to a portion of the left hemisphere now called Broca's area. By that time, it was known that the brain consisted of nerve cells or neurons, but it was not until 1873 that Camillo Golgi (1843-1926) developed a staining technique allowing the observation of individual neurons in the brain (see Figure 1.2). This technique was used by Santiago Ramon y Cajal (1852-1934) in his pioneering studies of the brain's neuronal structures. [8]

We now have some data on the mapping between areas of the brain and the parts of the body that they control or from which they receive sensory input. Such mappings are able to change radically over the course of a few weeks, and some animals seem to have multiple maps. Moreover, we do not fully understand how other areas can take over functions when one area is damaged. There is almost no theory on how an individual memory is stored.

The measurement of intact brain activity began in 1929 with the invention by Hans Berger of the electroencephalograph (EEG). The recent development of functional magnetic resonance imaging (fMRI) (Ogawa et al., 1990) is giving neuroscientists unprecedentedly detailed images of brain activity, enabling measurements that correspond in interesting ways to ongoing cognitive processes. These are augmented by advances in single-cell recording of neuron activity.

[8] Golgi persisted in his belief that the brain's functions were carried out primarily in a continuous medium in which neurons were embedded, whereas Cajal propounded the "neuronal doctrine." The two shared the Nobel prize in 1906 but gave mutually antagonistic acceptance speeches.
Chapter 1. Introduction
The Foundations of Artificial Intelligence

The truly amazing conclusion is that a collection of simple cells can lead to thought, action, and consciousness or, in other words, that brains cause minds (Searle, 1992). The only real alternative theory is mysticism: that there is some mystical realm in which minds operate that is beyond physical science.

Brains and digital computers perform quite different tasks and have different properties. Figure 1.3 shows that there are 1000 times more neurons in the typical human brain than there are gates in the CPU of a typical high-end computer. Moore's Law predicts that the CPU's gate count will equal the brain's neuron count around 2020. [9] Of course, little can be inferred from such predictions; moreover, the difference in raw switching speed and in parallelism is far more important than the difference in numbers. Computer chips can execute an instruction in a nanosecond, whereas neurons are millions of times slower. Brains more than make up for this, however, because all the neurons and synapses are active simultaneously, whereas most current computers have only one or at most a few CPUs. Thus, even though a computer is a million times faster in raw switching speed, the brain ends up being 100,000 times faster at what it does.

[Figure 1.3: A crude comparison of the raw computational resources available to computers (circa 2003) and brains.

                        Computer                  Human Brain
  Computational units   1 CPU, 10^8 gates         10^11 neurons
  Storage units         10^10 bits RAM            10^11 neurons
                        10^11 bits disk           10^14 synapses
  Cycle time            10^-9 sec                 10^-3 sec
  Bandwidth             10^10 bits/sec            10^14 bits/sec
  Memory updates/sec    10^9                      10^14

The computer's numbers have increased by at least a factor of 10 since the first edition of this book, and are expected to do so again this decade. The brain's numbers have not changed in the last 10,000 years.]

[9] Moore's Law says that the number of transistors per square inch doubles every 1 to 1.5 years. Human brain capacity doubles roughly every 2 to 4 million years.

Psychology (1879-present)

o How do humans and animals think and act?

The origins of scientific psychology are usually traced to the work of the German physicist Hermann von Helmholtz (1821-1894) and his student Wilhelm Wundt (1832-1920). Helmholtz applied the scientific method to the study of human vision, and his Handbook of Physiological Optics is even now described as "the single most important treatise on the physics and physiology of human vision" (Nalwa, 1993, p. 15). In 1879, Wundt opened the first laboratory of experimental psychology at the University of Leipzig. Wundt insisted on carefully controlled experiments in which his workers would perform a perceptual or associative task while introspecting on their thought processes. The careful controls went a long way toward making psychology a science, but the subjective nature of the data made it unlikely that an experimenter would ever disconfirm his or her own theories. Biologists studying animal behavior, on the other hand, lacked introspective data and developed an objective methodology, as described by H. S. Jennings (1906) in his influential work Behavior of the Lower Organisms. Applying this viewpoint to humans, the behaviorism movement, led by John Watson (1878-1958), rejected any theory involving mental processes on the grounds that introspection could not provide reliable evidence. Behaviorists insisted on studying only objective measures of the percepts (or stimulus) given to an animal and its resulting actions (or response). Mental constructs such as knowledge, beliefs, goals, and reasoning steps were dismissed as unscientific "folk psychology." Behaviorism discovered a lot about rats and pigeons, but had less success at understanding humans. Nevertheless, it exerted a strong hold on psychology (especially in the United States) from about 1920 to 1960.

The view of the brain as an information-processing device, which is a principal characteristic of cognitive psychology, can be traced back at least to the works of William James [10] (1842-1910). Helmholtz also insisted that perception involved a form of unconscious logical inference. The cognitive viewpoint was largely eclipsed by behaviorism in the United States, but at Cambridge's Applied Psychology Unit, directed by Frederic Bartlett (1886-1969), cognitive modeling was able to flourish. The Nature of Explanation, by Bartlett's student and successor Kenneth Craik (1943), forcefully reestablished the legitimacy of such "mental" terms as beliefs and goals, arguing that they are just as scientific as, say, using pressure and temperature to talk about gases, despite their being made of molecules that have neither. Craik specified the three key steps of a knowledge-based agent: (1) the stimulus must be translated into an internal representation, (2) the representation is manipulated by cognitive processes to derive new internal representations, and (3) these are in turn re-translated back into action. He clearly explained why this was a good design for an agent:

    If the organism carries a "small-scale model" of external reality and of its own possible actions within its head, it is able to try out various alternatives, conclude which is the best of them, react to future situations before they arise, utilize the knowledge of past events in dealing with the present and future, and in every way to react in a much fuller, safer, and more competent manner to the emergencies which face it. (Craik, 1943)

After Craik's death in a bicycle accident in 1945, his work was continued by Donald Broadbent, whose book Perception and Communication (1958) included some of the first information-processing models of psychological phenomena. Meanwhile, in the United States, the development of computer modeling led to the creation of the field of cognitive science. The field can be said to have started at a workshop in September 1956 at MIT. (We shall see that this is just two months after the conference at which AI itself was "born.") At the workshop, George Miller presented The Magic Number Seven, Noam Chomsky presented Three Models of Language, and Allen Newell and Herbert Simon presented The Logic Theory Machine. These three influential papers showed how computer models could be used to address the psychology of memory, language, and logical thinking, respectively. It is now a common view among psychologists that "a cognitive theory should be like a computer program" (Anderson, 1980), that is, it should describe a detailed information-processing mechanism whereby some cognitive function might be implemented.

[10] William James was the brother of novelist Henry James. It is said that Henry wrote fiction as if it were psychology and William wrote psychology as if it were fiction.

Computer engineering (1940-present)

o How can we build an efficient computer?

For artificial intelligence to succeed, we need two things: intelligence and an artifact. The computer has been the artifact of choice. The modern digital electronic computer was invented independently and almost simultaneously by scientists in three countries embattled in World War II. The first operational computer was the electromechanical Heath Robinson, built in 1940 by Alan Turing's team for a single purpose: deciphering German messages. In 1943, the same group developed the Colossus, a powerful general-purpose machine based on vacuum tubes. [12] The first operational programmable computer was the Z-3, the invention of Konrad Zuse in Germany in 1941. Zuse also invented floating-point numbers and the first high-level programming language, Plankalkül. The first electronic computer, the ABC, was assembled by John Atanasoff and his student Clifford Berry at Iowa State University. Atanasoff's research received little support or recognition; it was the ENIAC, developed as part of a secret military project at the University of Pennsylvania by a team including John Mauchly and John Eckert, that proved to be the most influential forerunner of modern computers.

[12] In the postwar period, Turing wanted to use these computers for AI research, for example, in one of the first chess programs (Turing et al., 1953). His efforts were blocked by the British government.

In the half-century since then, each generation of computer hardware has brought an increase in speed and capacity and a decrease in price. Performance doubles every 18 months or so, with a decade or two to go at this rate of increase. After that, we will need molecular engineering or some other new technology.

Of course, there were calculating devices before the electronic computer. The earliest automated machines, dating from the 17th century, were discussed on page 6. The first programmable machine was a loom devised in 1805 by Joseph Marie Jacquard (1752-1834) that used punched cards to store instructions for the pattern to be woven. In the mid-19th century, Charles Babbage (1792-1871) designed two machines, neither of which he completed. The "Difference Engine," which appears on the cover of this book, was intended to compute mathematical tables for engineering and scientific projects. It was finally built and shown to work in 1991 at the Science Museum in London (Swade, 1993). Babbage's "Analytical Engine" was far more ambitious: it included addressable memory, stored programs, and conditional jumps and was the first artifact capable of universal computation. Babbage's colleague Ada Lovelace, daughter of the poet Lord Byron, was perhaps the world's first programmer. (The programming language Ada is named after her.) She wrote programs for the unfinished Analytical Engine and even speculated that the machine could play chess or compose music.

AI also owes a debt to the software side of computer science, which has supplied the operating systems, programming languages, and tools needed to write modern programs (and papers about them). But this is one area where the debt has been repaid: work in AI has pioneered many ideas that have made their way back to mainstream computer science, including time sharing, interactive interpreters, personal computers with windows and mice, rapid development environments, the linked list data type, automatic storage management, and key concepts of symbolic, functional, dynamic, and object-oriented programming.

Control theory and Cybernetics (1948-present)

o How can artifacts operate under their own control?

Ktesibios of Alexandria (c. 250 B.C.) built the first self-controlling machine: a water clock with a regulator that kept the flow of water running through it at a constant, predictable pace. This invention changed the definition of what an artifact could do. Previously, only living things could modify their behavior in response to changes in the environment. Other examples of self-regulating feedback control systems include the steam engine governor, created by James Watt (1736-1819), and the thermostat, invented by Cornelis Drebbel (1572-1633), who also invented the submarine. The mathematical theory of stable feedback systems was developed in the 19th century.

The central figure in the creation of what is now called control theory was Norbert Wiener (1894-1964). Wiener was a brilliant mathematician who worked with Bertrand Russell, among others, before developing an interest in biological and mechanical control systems and their connection to cognition. Like Craik (who also used control systems as psychological models), Wiener and his colleagues Arturo Rosenblueth and Julian Bigelow challenged the behaviorist orthodoxy (Rosenblueth et al., 1943). They viewed purposive behavior as arising from a regulatory mechanism trying to minimize "error," the difference between current state and goal state. In the late 1940s, Wiener, along with Warren McCulloch, Walter Pitts, and John von Neumann, organized a series of conferences that explored the new mathematical and computational models of cognition and influenced many other researchers in the behavioral sciences. Wiener's book Cybernetics (1948) became a bestseller and awoke the public to the possibility of artificially intelligent machines.

Modern control theory, especially the branch known as stochastic optimal control, has as its goal the design of systems that maximize an objective function over time. This roughly matches our view of AI: designing systems that behave optimally. Why, then, are AI and control theory two different fields, especially given the close connections among their founders? The answer lies in the close coupling between the mathematical techniques that were familiar to the participants and the corresponding sets of problems that were encompassed in each world view. Calculus and matrix algebra, the tools of control theory, lend themselves to systems that are describable by fixed sets of continuous variables; furthermore, exact analysis is typically feasible only for linear systems. AI was founded in part as a way to escape from the limitations of the mathematics of control theory in the 1950s. The tools of logical inference and computation allowed AI researchers to consider some problems, such as language, vision, and planning, that fell completely outside the control theorist's purview.
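Wiener's notion of purposive behavior, a regulator acting to shrink the "error" between current state and goal state, can be sketched in a few lines of Python. This is purely an illustration of the idea, not anything from the text: the thermostat framing, the function name, and the gain value are all my own assumptions.

```python
def control_step(current: float, goal: float, gain: float = 0.5) -> float:
    """One step of a simple proportional controller: act so as to
    reduce the error between the current state and the goal state."""
    error = goal - current
    return current + gain * error  # move a fraction of the way toward the goal

# A thermostat-like loop: the state converges toward the goal setting.
state = 15.0  # e.g., room temperature in degrees
for _ in range(20):
    state = control_step(state, goal=20.0)
print(round(state, 3))  # -> 20.0
```

Each step removes a fixed fraction of the remaining error, so the state converges geometrically toward the goal, which is qualitatively the self-regulating behavior of Ktesibios's water clock or Watt's governor described above.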
Linguistics (1957-present)

o How does language relate to thought?

In 1957, B. F. Skinner published Verbal Behavior. This was a comprehensive, detailed account of the behaviorist approach to language learning, written by the foremost expert in the field. But curiously, a review of the book became as well known as the book itself, and served to almost kill off interest in behaviorism. The author of the review was the linguist Noam Chomsky, who had just published a book on his own theory, Syntactic Structures. Chomsky pointed out that the behaviorist theory did not address the notion of creativity in language: it did not explain how a child could understand and make up sentences that he or she had never heard before. Chomsky's theory, based on syntactic models going back to the Indian linguist Panini (c. 350 B.C.), could explain this, and unlike previous theories, it was formal enough that it could in principle be programmed.

Modern linguistics and AI, then, were "born" at about the same time, and grew up together, intersecting in a hybrid field called computational linguistics or natural language processing. The problem of understanding language soon turned out to be considerably more complex than it seemed in 1957. Understanding language requires an understanding of the subject matter and context, not just an understanding of the structure of sentences. This might seem obvious, but it was not widely appreciated until the 1960s. Much of the early work in knowledge representation (the study of how to put knowledge into a form that a computer can reason with) was tied to language and informed by research in linguistics, which was connected in turn to decades of work on the philosophical analysis of language.

The History of Artificial Intelligence

With the background material behind us, we are ready to cover the development of AI itself.

The gestation of artificial intelligence (1943-1955)

The first work that is now generally recognized as AI was done by Warren McCulloch and Walter Pitts (1943). They drew on three sources: knowledge of the basic physiology and function of neurons in the brain; a formal analysis of propositional logic due to Russell and Whitehead; and Turing's theory of computation. They proposed a model of artificial neurons in which each neuron is characterized as being "on" or "off," with a switch to "on" occurring in response to stimulation by a sufficient number of neighboring neurons. The state of a neuron was conceived of as "factually equivalent to a proposition which proposed its adequate stimulus." They showed, for example, that any computable function could be computed by some network of connected neurons, and that all the logical connectives (and, or, not, etc.) could be implemented by simple net structures. McCulloch and Pitts also suggested that suitably defined networks could learn. Donald Hebb (1949) demonstrated a simple updating rule for modifying the connection strengths between neurons. His rule, now called Hebbian learning, remains an influential model to this day.

Two graduate students in the Princeton mathematics department, Marvin Minsky and Dean Edmonds, built the first neural network computer in 1951. The SNARC, as it was called, used 3000 vacuum tubes and a surplus automatic pilot mechanism from a B-24 bomber to simulate a network of 40 neurons. Minsky's Ph.D. committee was skeptical about whether this kind of work should be considered mathematics, but von Neumann reportedly said, "If it isn't now, it will be someday." Minsky was later to prove influential theorems showing the limitations of neural network research.

There were a number of early examples of work that can be characterized as AI, but it was Alan Turing who first articulated a complete vision of AI in his 1950 article "Computing Machinery and Intelligence." Therein, he introduced the Turing test, machine learning, genetic algorithms, and reinforcement learning.

The birth of artificial intelligence (1956)

Princeton was home to another influential figure in AI, John McCarthy. After graduation, McCarthy moved to Dartmouth College, which was to become the official birthplace of the field. McCarthy convinced Minsky, Claude Shannon, and Nathaniel Rochester to help him bring together U.S. researchers interested in automata theory, neural nets, and the study of intelligence. They organized a two-month workshop at Dartmouth in the summer of 1956. There were 10 attendees in all, including Trenchard More from Princeton, Arthur Samuel from IBM, and Ray Solomonoff and Oliver Selfridge from MIT.

Two researchers from Carnegie Tech, [13] Allen Newell and Herbert Simon, rather stole the show. Although the others had ideas and in some cases programs for particular applications such as checkers, Newell and Simon already had a reasoning program, the Logic Theorist (LT), about which Simon claimed, "We have invented a computer program capable of thinking non-numerically, and thereby solved the venerable mind-body problem." [14] Soon after the workshop, the program was able to prove most of the theorems in Chapter 2 of Russell and Whitehead's Principia Mathematica. Russell was reportedly delighted when Simon showed him that the program had come up with a proof for one theorem that was shorter than the one in Principia. The editors of the Journal of Symbolic Logic were less impressed; they rejected a paper coauthored by Newell, Simon, and Logic Theorist.

The Dartmouth workshop did not lead to any new breakthroughs, but it did introduce all the major figures to each other. For the next 20 years, the field would be dominated by these people and their students and colleagues at MIT, CMU, Stanford, and IBM. Perhaps the longest-lasting thing to come out of the workshop was an agreement to adopt McCarthy's new name for the field: artificial intelligence. Perhaps "computational rationality" would have been better, but "AI" has stuck.

[13] Now Carnegie Mellon University (CMU).

[14] Newell and Simon also invented a list-processing language, IPL, to write LT. They had no compiler, and translated it into machine code by hand. To avoid errors, they worked in parallel, calling out binary numbers to each other as they wrote each instruction to make sure they agreed.

Looking at the proposal for the Dartmouth workshop (McCarthy et al., 1955), we can see why it was necessary for AI to become a separate field. Why couldn't all the work done
in AI have taken place under the name of control theory, or operations research, or decision theory, which, after all, have objectives similar to those of AI? Or why isn't AI a branch of mathematics? The first answer is that AI from the start embraced the idea of duplicating human faculties like creativity, self-improvement, and language use. None of the other fields were addressing these issues. The second answer is methodology. AI is the only field to attempt to build machines that will function autonomously in complex, changing environments.

Early enthusiasm, great expectations (1952-1969)

The early years of AI were full of successes, in a limited way. Given the primitive computers and programming tools of the time, and the fact that only a few years earlier computers were seen as things that could do arithmetic and no more, it was astonishing whenever a computer did anything remotely clever. The intellectual establishment, by and large, preferred to believe that "a machine can never do X." (See Chapter 26 for a long list of X's gathered by Turing.) AI researchers naturally responded by demonstrating one X after another. John McCarthy referred to this period as the "Look, Ma, no hands!" era.

Newell and Simon's early success was followed up with the General Problem Solver, or GPS. Unlike Logic Theorist, this program was designed from the start to imitate human problem-solving protocols. Within the limited class of puzzles it could handle, it turned out that the order in which the program considered subgoals and possible actions was similar to that in which humans approached the same problems. Thus, GPS was probably the first program to embody the "thinking humanly" approach. The success of GPS and subsequent programs as models of cognition led Newell and Simon (1976) to formulate the famous physical symbol system hypothesis, which states that "a physical symbol system has the necessary and sufficient means for general intelligent action."

At IBM, Nathaniel Rochester and his colleagues produced some of the first AI programs. Herbert Gelernter (1959) constructed the Geometry Theorem Prover, which was able to prove theorems that many students of mathematics would find quite tricky. Starting in 1952, Arthur Samuel wrote a series of programs for checkers (draughts) that eventually learned to play a strong amateur-level game. Along the way, he disproved the idea that computers can do only what they are told to: his program quickly learned to play a better game than its creator. The program was demonstrated on television in February 1956, creating a strong impression. Like Turing, Samuel had trouble finding computer time. Working at night, he used machines that were still on the testing floor at IBM's manufacturing plant. Chapter 6 describes game playing, and Chapter 21 describes the learning techniques used by Samuel.

John McCarthy moved from Dartmouth to MIT and there made three crucial contributions in one historic year: 1958. In MIT AI Lab Memo No. 1, McCarthy defined the high-level language Lisp, which was to become the dominant AI programming language. Lisp is the second-oldest major high-level language in current use, one year younger than FORTRAN. With Lisp, McCarthy had the tool he needed, but access to scarce and expensive computing resources was also a serious problem. In response, he and others at MIT invented time sharing. Also in 1958, McCarthy published a paper entitled Programs with Common Sense, in which he described the Advice Taker, a hypothetical program that can be seen as the first complete AI system. Like the Logic Theorist and Geometry Theorem Prover, McCarthy's program was designed to use knowledge to search for solutions to problems. But unlike the others, it was to embody general knowledge of the world. For example, he showed how some simple axioms would enable the program to generate a plan to drive to the airport to catch a plane. The program was also designed so that it could accept new axioms in the normal course of operation, thereby allowing it to achieve competence in new areas without being reprogrammed. The Advice Taker thus embodied the central principles of knowledge representation and reasoning: that it is useful to have a formal, explicit representation of the world and of the way an agent's actions affect the world and to be able to manipulate these representations with deductive processes. It is remarkable how much of the 1958 paper remains relevant even today.

1958 also marked the year that Marvin Minsky moved to MIT. His initial collaboration with McCarthy did not last, however. McCarthy stressed representation and reasoning in formal logic, whereas Minsky was more interested in getting programs to work and eventually developed an anti-logical outlook. In 1963, McCarthy started the AI lab at Stanford. His plan to use logic to build the ultimate Advice Taker was advanced by J. A. Robinson's discovery of the resolution method (a complete theorem-proving algorithm for first-order logic; see Chapter 9). Work at Stanford emphasized general-purpose methods for logical reasoning. Applications of logic included Cordell Green's question-answering and planning systems (Green, 1969b) and the Shakey robotics project at the new Stanford Research Institute (SRI). The latter project, discussed further in Chapter 25, was the first to demonstrate the complete integration of logical reasoning and physical activity.

Minsky supervised a series of students who chose limited problems that appeared to require intelligence to solve. These limited domains became known as microworlds. James Slagle's SAINT program (1963a) was able to solve closed-form calculus integration problems typical of first-year college courses. Tom Evans's ANALOGY program (1968) solved geometric analogy problems that appear in IQ tests, such as the one in Figure 1.4. Daniel Bobrow's STUDENT program (1967) solved algebra story problems, such as the following:

    If the number of customers Tom gets is twice the square of 20 percent of the number of advertisements he runs, and the number of advertisements he runs is 45, what is the number of customers Tom gets?

[Figure 1.4: An example problem solved by Evans's ANALOGY program.]

The most famous microworld was the blocks world, which consists of a set of solid blocks placed on a tabletop (or more often, a simulation of a tabletop), as shown in Figure 1.5. A typical task in this world is to rearrange the blocks in a certain way, using a robot hand that can pick up one block at a time. The blocks world was home to the vision project of David Huffman (1971), the vision and constraint-propagation work of David Waltz (1975), the learning theory of Patrick Winston (1970), the natural language understanding program of Terry Winograd (1972), and the planner of Scott Fahlman (1974).

[Figure 1.5: A scene from the blocks world. SHRDLU (Winograd, 1972) has just completed the command, "Find a block which is taller than the one you are holding and put it in the box."]

Early work building on the neural networks of McCulloch and Pitts also flourished. The work of Winograd and Cowan (1963) showed how a large number of elements could collectively represent an individual concept, with a corresponding increase in robustness and parallelism. Hebb's learning methods were enhanced by Bernie Widrow (Widrow and Hoff, 1960; Widrow, 1962), who called his networks adalines, and by Frank Rosenblatt (1962) with his perceptrons. Rosenblatt proved the perceptron convergence theorem, showing that his learning algorithm could adjust the connection strengths of a perceptron to match any input data, provided such a match existed. These topics are covered in Chapter 20.

A dose of reality (1966-1973)

From the beginning, AI researchers were not shy about making predictions of their coming successes. The following statement by Herbert Simon in 1957 is often quoted:

    It is not my aim to surprise or shock you, but the simplest way I can summarize is to say that there are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until, in a visible future, the range of problems they can handle will be coextensive with the range to which the human mind has been applied.

Terms such as "visible future" can be interpreted in various ways, but Simon also made a more concrete prediction: that within 10 years a computer would be chess champion, and a significant mathematical theorem would be proved by machine. These predictions came true (or approximately true) within 40 years rather than 10. Simon's over-confidence was due to the promising performance of early AI systems on simple examples. In almost all cases, however, these early systems turned out to fail miserably when tried out on wider selections of problems and on more difficult problems.

The first kind of difficulty arose because most early programs contained little or no knowledge of their subject matter; they succeeded by means of simple syntactic manipulations. A typical story occurred in early machine translation efforts, which were generously funded by the U.S. National Research Council in an attempt to speed up the translation of Russian scientific papers in the wake of the Sputnik launch in 1957. It was thought initially that simple syntactic transformations based on the grammars of Russian and English, and word replacement using an electronic dictionary, would suffice to preserve the exact meanings of sentences. The fact is that translation requires general knowledge of the subject matter in order to resolve ambiguity and establish the content of the sentence. The famous re-translation of "the spirit is willing but the flesh is weak" as "the vodka is good but the meat is rotten" illustrates the difficulties encountered. In 1966, a report by an advisory committee found that "there has been no machine translation of general scientific text, and none is in immediate prospect." All U.S. government funding for academic translation projects was canceled. Today, machine translation is an imperfect but widely used tool for technical, commercial, government, and Internet documents.

The second kind of difficulty was the intractability of many of the problems that AI was attempting to solve. Most of the early AI programs solved problems by trying out different combinations of steps until the solution was found. This strategy worked initially because microworlds contained very few objects and hence very few possible actions and very short solution sequences. Before the theory of computational complexity was developed, it was widely thought that "scaling up" to larger problems was simply a matter of faster hardware and larger memories. The optimism that accompanied the development of resolution theorem
proving, for example, was soon darnpened when researchers failed to prove theorems inv61;* Forexample,themassspecFummightconuinapeakatTp=lS,correspondingtothemass
tn" I
ing more than a few dozen facts. The fact that a progrcun canfincl a solution in principte of a methYl (CHs) fragment' generated all-posible strucrures consistent fm i
The naive version of the
not mean that the program contains any of the mechanisms needed to for each' comparing this
fnd it in practice.
and then predicted *t,^,
*ur, ,i".o"t *""ra be ob-served i
The illusion of unlinrited cornputational power was not confined to problem-solv1x, formula, decent-sized molerules'
As one J-n"tt, this is intractable for look- ;

MACHINE EVO{.WION programs. Early experiments in machine evolution (now called genetic algorithms) (Frisf with the acual specrrum. -*,ei, that they worked by
berg, I958; Friedberg et al., 1959) were based on the undoubtedly correcr belief thar The D'NpReL researchers
ronrutr"iffiri.or .r',"*iro and found
a ltetone (c=o)
making an appropriate series of small mutations to a machine code program, on" .un g.n,i subgroup
n"ie is used to
ttre following
ate a program with good performance for any particular simple task. The idea, then, was the molecule. For example, '""ogniz'e
try random mutations with a selection process to preserve mutations that seemed useful. fg. (which weighs 28):
spite thousands of hours of CPU time, almost no progress was demonsrrated. Modern genetil
if there are two peaks at
r1 and' so such that
whole molecule);
algorithms use better representations and have shown more success. (a) rr * it
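The recipe those early experiments relied on, random mutation plus selection, is easy to state concretely. Here is a minimal sketch; the bitstring encoding, the parameter values, and the toy "count the 1 bits" task are ours for illustration (the 1958 experiments mutated machine-code programs, which is precisely why they made so little progress):

```python
import random

def evolve(fitness, length=32, pop_size=20, generations=200, p_mut=0.02, seed=0):
    """Random mutation plus selection: keep the better half each generation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Mutation: flip each bit of each child independently with probability p_mut.
        children = [[b ^ (rng.random() < p_mut) for b in ind] for ind in pop]
        # Selection: retain the fittest pop_size individuals of parents + children.
        pop = sorted(pop + children, key=fitness, reverse=True)[:pop_size]
    return max(pop, key=fitness)

# Toy task ("OneMax"): maximize the number of 1 bits in the string.
best = evolve(fitness=sum)
print(sum(best))  # near the maximum of 32
```

With a representation this well-suited to bit-flip mutations, selection climbs quickly; the historical failure came from applying the same idea to raw machine code, where almost every mutation is fatal.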
Failure to come to grips with the "combinatorial explosion" was one of the main criticisms of AI contained in the Lighthill report (Lighthill, 1973), which formed the basis for the decision by the British government to end support for AI research in all but two universities. (Oral tradition paints a somewhat different and more colorful picture, with political ambitions and personal animosities whose description is beside the point.)

A third difficulty arose because of some fundamental limitations on the basic structures being used to generate intelligent behavior. For example, Minsky and Papert's book Perceptrons (1969) proved that, although perceptrons (a simple form of neural network) could be shown to learn anything they were capable of representing, they could represent very little. In particular, a two-input perceptron could not be trained to recognize when its two inputs were different. Although their results did not apply to more complex, multilayer networks, research funding for neural-net research soon dwindled to almost nothing. Ironically, the new back-propagation learning algorithms for multilayer networks that were to cause an enormous resurgence in neural-net research in the late 1980s were actually discovered first in 1969 (Bryson and Ho, 1969).
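The two-input limitation is easy to verify for yourself: a single linear threshold unit computes "inputs differ" (XOR) only if a line can separate the two output classes, and for XOR none can. The sketch below checks a finite grid of weights and thresholds (the grid and helper functions are ours for illustration; the grid search is a demonstration, not the proof, which is the separability argument itself):

```python
import itertools

def perceptron(w1, w2, theta, x1, x2):
    """A single linear threshold unit: fires iff w1*x1 + w2*x2 >= theta."""
    return int(w1 * x1 + w2 * x2 >= theta)

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}

def representable(target, grid):
    """Is there any (w1, w2, theta) on the grid realizing the target function?"""
    return any(all(perceptron(w1, w2, t, *x) == y for x, y in target.items())
               for w1, w2, t in itertools.product(grid, repeat=3))

grid = [i / 4 for i in range(-8, 9)]   # weights and thresholds in [-2, 2], step 0.25
print(representable(AND, grid))   # True: AND is linearly separable
print(representable(XOR, grid))   # False: XOR is not, for any real weights
```

The contradiction is immediate: XOR requires theta > 0, w1 >= theta, w2 >= theta, and yet w1 + w2 < theta, which is impossible.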
Knowledge-based systems: The key to power? (1969-1979)

The picture of problem solving that had arisen during the first decade of AI research was of a general-purpose search mechanism trying to string together elementary reasoning steps to find complete solutions. Such approaches have been called weak methods because, although general, they do not scale up to large or difficult problem instances. The alternative to weak methods is to use more powerful, domain-specific knowledge that allows larger reasoning steps and can more easily handle typically occurring cases in narrow areas of expertise. One might say that to solve a hard problem, you have to almost know the answer already.

The DENDRAL program (Buchanan et al., 1969) was an early example of this approach. It was developed at Stanford, where Ed Feigenbaum (a former student of Herbert Simon), Bruce Buchanan (a philosopher turned computer scientist), and Joshua Lederberg (a Nobel laureate geneticist) teamed up to solve the problem of inferring molecular structure from the information provided by a mass spectrometer. The input to the program consists of the elementary formula of the molecule (e.g., C6H13NO2) and the mass spectrum giving the masses of the various fragments of the molecule. For example, the mass spectrum might contain a peak at m = 15, corresponding to the mass of a methyl (CH3) fragment.

The naive version of the program generated all possible structures consistent with the formula, and then predicted what mass spectrum would be observed for each, comparing this prediction with the actual spectrum. As one might expect, this is intractable for even decent-sized molecules. The DENDRAL researchers consulted analytical chemists and found that they worked by looking for well-known patterns of peaks in the spectrum that suggested common substructures in the molecule. For example, the following rule is used to recognize a ketone (C=O) subgroup (which weighs 28):

    if there are two peaks at x1 and x2 such that
        (a) x1 + x2 = M + 28 (M is the mass of the whole molecule);
        (b) x1 - 28 is a high peak;
        (c) x2 - 28 is a high peak;
        (d) at least one of x1 and x2 is high,
    then there is a ketone subgroup.

Recognizing that the molecule contains a particular substructure reduces the number of possible candidates enormously. DENDRAL was powerful because all the relevant theoretical knowledge to solve these problems had been mapped over from its general form in the spectrum-prediction component ("first principles") to efficient special forms ("cookbook recipes") (Feigenbaum et al., 1971).

The significance of DENDRAL was that it was the first successful knowledge-intensive system: its expertise derived from large numbers of special-purpose rules. Later systems also incorporated the main theme of McCarthy's Advice Taker approach: the clean separation of the knowledge (in the form of rules) from the reasoning component.

With this lesson in mind, Feigenbaum and others at Stanford began the Heuristic Programming Project (HPP) to investigate the extent to which the new methodology of expert systems could be applied to other areas of human expertise. The next major effort was in the area of medical diagnosis. Feigenbaum, Buchanan, and Dr. Edward Shortliffe developed MYCIN to diagnose blood infections. With about 450 rules, MYCIN was able to perform as well as some experts, and considerably better than junior doctors. It also contained two major differences from DENDRAL. First, unlike the DENDRAL rules, no general theoretical model existed from which the MYCIN rules could be deduced. They had to be acquired from extensive interviewing of experts, who in turn acquired them from textbooks, other experts, and direct experience of cases. Second, the rules had to reflect the uncertainty associated with medical knowledge. MYCIN incorporated a calculus of uncertainty called certainty factors (see Chapter 13), which seemed to fit well with how doctors assessed the impact of evidence on the diagnosis.
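The certainty-factor calculus is usually presented with three combination rules: a conjunction of premises takes the minimum of its conjuncts, a rule contributes its own CF scaled by the (positive part of the) premise CF, and two pieces of evidence for the same conclusion reinforce each other. The sketch below follows that standard textbook formulation, not MYCIN's actual code:

```python
def cf_conjunction(cfs):
    """CF of a conjunctive premise: the weakest conjunct dominates."""
    return min(cfs)

def cf_apply(rule_cf, premise_cf):
    """CF a rule contributes: scaled by its premise, and only if the premise is positive."""
    return rule_cf * max(0.0, premise_cf)

def cf_combine(cf1, cf2):
    """Combine two bodies of evidence bearing on the same conclusion."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)          # both confirm: reinforce toward 1
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)          # both disconfirm: reinforce toward -1
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))  # mixed evidence

# Two rules, each 0.6 confident given their premises, support the same diagnosis:
cf_a = cf_apply(0.6, cf_conjunction([0.9, 0.8]))   # 0.6 * 0.8 = 0.48
cf_b = cf_apply(0.6, 1.0)                          # 0.6
print(round(cf_combine(cf_a, cf_b), 3))            # 0.48 + 0.6 * 0.52 = 0.792
```

Note that `cf_combine` is order-independent, which mattered for a system whose rules could fire in any order; the later move to probabilistic methods (Chapter 13) came precisely because these ad hoc rules can disagree with probability theory.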

The importance of domain knowledge was also apparent in natural language understanding. Although Winograd's SHRDLU system had engendered a good deal of excitement, its dependence on syntactic analysis caused some of the same problems as occurred in the early machine translation work. It was able to overcome ambiguity and understand pronoun references, but this was mainly because it was designed specifically for one area: the blocks world. Several researchers, including Eugene Charniak, a fellow graduate student of Winograd's at MIT, suggested that robust language understanding would require general knowledge about the world and a general method for using that knowledge.

At Yale, the linguist-turned-AI-researcher Roger Schank emphasized this point, claiming, "There is no such thing as syntax," which upset a lot of linguists but did serve to start a useful discussion. Schank and his students built a series of programs (Schank and Abelson, 1977; Wilensky, 1978; Schank and Riesbeck, 1981; Dyer, 1983) that all had the task of understanding natural language. The emphasis, however, was less on language per se and more on the problems of representing and reasoning with the knowledge required for language understanding. The problems included representing stereotypical situations (Cullingford, 1981), describing human memory organization (Rieger, 1976; Kolodner, 1983), and understanding plans and goals (Wilensky, 1983).

The widespread growth of applications to real-world problems caused a concurrent increase in the demand for workable knowledge representation schemes. A large number of different representation and reasoning languages were developed. Some were based on logic: the Prolog language, for example, became popular in Europe, and the PLANNER family in the United States. Others, following Minsky's idea of frames (1975), adopted a more structured approach, assembling facts about particular object and event types and arranging the types into a large taxonomic hierarchy analogous to a biological taxonomy.
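The core mechanics of a frame-style taxonomy, types with default properties arranged in a hierarchy, fit in a few lines. The sketch below is our illustration of the idea, not a reconstruction of any particular frame language; the animal types are the classic example:

```python
class Frame:
    """A type node: local slots plus inheritance from a parent frame."""
    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, slots

    def get(self, slot):
        # Look locally first, then walk up the taxonomy for an inherited default.
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        raise KeyError(slot)

animal  = Frame("Animal", legs=4, can_fly=False)
bird    = Frame("Bird", parent=animal, legs=2, can_fly=True)
penguin = Frame("Penguin", parent=bird, can_fly=False)  # exception overrides the default

print(penguin.get("legs"))     # 2, inherited from Bird
print(penguin.get("can_fly"))  # False, overridden locally
```

The penguin example shows both the appeal and the trouble with frames: defaults with exceptions are convenient, but they make the inference nonmonotonic, a point taken up in Chapter 10.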

AI becomes an industry (1980-present)

The first successful commercial expert system, R1, began operation at the Digital Equipment Corporation (McDermott, 1982). The program helped configure orders for new computer systems; by 1986, it was saving the company an estimated $40 million a year. By 1988, DEC's AI group had 40 expert systems deployed, with more on the way. Du Pont had 100 in use and 500 in development, saving an estimated $10 million a year. Nearly every major U.S. corporation had its own AI group and was either using or investigating expert systems.

In 1981, the Japanese announced the "Fifth Generation" project, a 10-year plan to build intelligent computers running Prolog. In response, the United States formed the Microelectronics and Computer Technology Corporation (MCC) as a research consortium designed to assure national competitiveness. In both cases, AI was part of a broad effort, including chip design and human-interface research. However, the AI components of MCC and the Fifth Generation projects never met their ambitious goals. In Britain, the Alvey report reinstated the funding that was cut by the Lighthill report (to save embarrassment, a new field called IKBS, Intelligent Knowledge-Based Systems, was invented, because artificial intelligence had been officially cancelled).

Overall, the AI industry boomed from a few million dollars in 1980 to billions of dollars in 1988. Soon after that came a period called the "AI Winter," in which many companies suffered as they failed to deliver on extravagant promises.

The return of neural networks (1986-present)

Although computer science had largely abandoned the field of neural networks in the late 1970s, work continued in other fields. Physicists such as John Hopfield (1982) used techniques from statistical mechanics to analyze the storage and optimization properties of networks, treating collections of nodes like collections of atoms. Psychologists including David Rumelhart and Geoff Hinton continued the study of neural-net models of memory. As we discuss in Chapter 20, the real impetus came in the mid-1980s, when at least four different groups reinvented the back-propagation learning algorithm first found in 1969 by Bryson and Ho. The algorithm was applied to many learning problems in computer science and psychology, and the widespread dissemination of the results in the collection Parallel Distributed Processing (Rumelhart and McClelland, 1986) caused great excitement.

These so-called connectionist models of intelligent systems were seen by some as direct competitors both to the symbolic models promoted by Newell and Simon and to the logicist approach of McCarthy and others. The current view is that connectionist and symbolic approaches are complementary, not competing.

AI becomes a science (1987-present)

Recent years have seen a revolution in both the content and the methodology of work in artificial intelligence. It is now more common to build on existing theories than to propose brand-new ones, to base claims on rigorous theorems or hard experimental evidence rather than on intuition, and to show relevance to real-world applications rather than to toy examples.

AI was founded in part as a rebellion against the limitations of existing fields like control theory and statistics, but now it is embracing those fields. As David McAllester (1998) put it:

    In the early period of AI it seemed plausible that new forms of symbolic computation made much of classical theory obsolete. This led to a form of isolationism in which AI became largely separated from the rest of computer science. This isolationism is currently being abandoned. There is a recognition that machine learning should not be isolated from information theory, that uncertain reasoning should not be isolated from stochastic modeling, that search should not be isolated from classical optimization and control, and that automated reasoning should not be isolated from formal methods and static analysis.

In terms of methodology, AI has finally come firmly under the scientific method. To be accepted, hypotheses must be subjected to rigorous empirical experiments, and the results must be analyzed statistically for their importance (Cohen, 1995). A shift toward neatness, toward approaches grounded in mathematical rigor, implies that the field has reached a level of stability and maturity; whether that stability will be disrupted by a new revolutionary idea is another question.

The emergence of intelligent agents (1995-present)

Perhaps encouraged by the progress in solving the subproblems of AI, researchers have also started to look at the "whole agent" problem again. The work of Allen Newell, John Laird, and Paul Rosenbloom on SOAR (Newell, 1990; Laird et al., 1987) is the best-known example of a complete agent architecture. The so-called situated movement aims to understand the workings of agents embedded in real environments with continuous sensory inputs. One of the most important environments for intelligent agents is the Internet. AI systems have become so common in Web-based applications that the "-bot" suffix has entered everyday language. Moreover, AI technologies underlie many Internet tools, such as search engines, recommender systems, and Web site construction systems.

Besides the first edition of this text (Russell and Norvig, 1995), other recent texts have also adopted the agent perspective (Poole et al., 1998; Nilsson, 1998). One consequence of trying to build complete agents is the realization that the previously isolated subfields of AI might need to be reorganized somewhat when their results are to be tied together. In particular, it is now widely appreciated that sensory systems (vision, sonar, speech recognition, etc.) cannot deliver perfectly reliable information about the environment. Hence, reasoning and planning systems must be able to handle uncertainty. A second major consequence of the agent perspective is that AI has been drawn into much closer contact with other fields, such as control theory and economics, that also deal with agents.


The State of the Art

What can AI do today? A concise answer is difficult, because there are so many activities in so many subfields. Here we sample a few applications; others appear throughout the book.

Autonomous planning and scheduling: A hundred million miles from Earth, NASA's Remote Agent program became the first on-board autonomous planning program to control the scheduling of operations for a spacecraft (Jonsson et al., 2000). Remote Agent generated plans from high-level goals specified from the ground, and it monitored the operation of the spacecraft as the plans were executed, detecting, diagnosing, and recovering from problems as they occurred.

Game playing: IBM's Deep Blue became the first computer program to defeat the world champion in a chess match when it bested Garry Kasparov by a score of 3.5 to 2.5 in an exhibition match (Goodman and Keene, 1997). Kasparov said that he felt a "new kind of intelligence" across the board from him. Newsweek magazine described the match as "The brain's last stand." The value of IBM's stock increased by $18 billion.

Autonomous control: The ALVINN computer vision system was trained to steer a car to keep it following a lane. It was placed in CMU's NAVLAB computer-controlled minivan and used to navigate across the United States; for 2850 miles it was in control of steering the vehicle 98% of the time. A human took over the other 2%, mostly at exit ramps. NAVLAB has video cameras that transmit road images to ALVINN, which then computes the best direction to steer, based on experience from previous training runs.
Diagnosis: Medical diagnosis programs based on probabilistic analysis have been able to perform at the level of an expert physician in several areas of medicine. Heckerman (1991) describes a case where a leading expert on lymph-node pathology scoffs at a program's diagnosis of an especially difficult case. The creators of the program suggest he ask the computer for an explanation of the diagnosis. The machine points out the major factors influencing its decision and explains the subtle interaction of several of the symptoms in this case. Eventually, the expert agrees with the program.

Logistics planning: During the Persian Gulf crisis of 1991, U.S. forces deployed a Dynamic Analysis and Replanning Tool, DART (Cross and Walker, 1994), to do automated logistics planning and scheduling for transportation. This involved up to 50,000 vehicles, cargo, and people at a time, and had to account for starting points, destinations, routes, and conflict resolution among all parameters. The AI planning techniques allowed a plan to be generated in hours that would have taken weeks with older methods. The Defense Advanced Research Projects Agency (DARPA) stated that this single application more than paid back DARPA's 30-year investment in AI.

Robotics: Many surgeons now use robot assistants in microsurgery. HipNav (DiGioia et al., 1996) is a system that uses computer vision techniques to create a three-dimensional model of a patient's internal anatomy and then uses robotic control to guide the insertion of a hip replacement prosthesis.

Language understanding and problem solving: PROVERB (Littman et al., 1999) is a computer program that solves crossword puzzles better than most humans, using constraints on possible word fillers, a large database of past puzzles, and a variety of information sources including dictionaries and online databases such as a list of movies and the actors that appear in them. For example, it determines that the clue "Nice Story" can be solved by "ETAGE" because its database includes the clue/solution pair "Story in France/ETAGE" and because it recognizes that the patterns "Nice X" and "X in France" often have the same solution. The program does not know that Nice is a city in France, but it can solve the puzzle.

These are just a few examples of artificial intelligence systems that exist today. Not magic or science fiction, but rather science, engineering, and mathematics, to which this book provides an introduction.

Summary

This chapter defines AI and establishes the cultural background against which it has developed. Some of the important points are as follows:

• Different people think of AI differently. Two important questions to ask are: Are you concerned with thinking or behavior? Do you want to model humans or work from an ideal standard?

• In this book, we adopt the view that intelligence is concerned mainly with rational action. Ideally, an intelligent agent takes the best possible action in a situation. We will study the problem of building agents that are intelligent in this sense.

• Philosophers (going back to 400 B.C.) made AI conceivable by considering the ideas that the mind is in some ways like a machine, that it operates on knowledge encoded in some internal language, and that thought can be used to choose what actions to take.

• Mathematicians provided the tools to manipulate statements of logical certainty as well as uncertain, probabilistic statements. They also set the groundwork for understanding computation and reasoning about algorithms.

• Economists formalized the problem of making decisions that maximize the expected outcome to the decision-maker.

• Psychologists adopted the idea that humans and animals can be considered information-processing machines. Linguists showed that language use fits into this model.

• Computer engineers provided the artifacts that make AI applications possible. AI programs tend to be large, and they could not work without the great advances in speed and memory that the computer industry has provided.

• Control theory deals with designing devices that act optimally on the basis of feedback from the environment. Initially, the mathematical tools of control theory were quite different from those used in AI, but the fields are coming closer together.

• The history of AI has had cycles of success, misplaced optimism, and resulting cutbacks in enthusiasm and funding. There have also been cycles of introducing new creative approaches and systematically refining the best ones.

• AI has advanced more rapidly in the past decade because of greater use of the scientific method in experimenting with and comparing approaches.

• Recent progress in understanding the theoretical basis for intelligence has gone hand in hand with improvements in the capabilities of real systems. The subfields of AI have become more integrated, and AI has found common ground with other disciplines.

Bibliographical and Historical Notes

The methodological status of artificial intelligence is investigated in The Sciences of the Artificial, by Herb Simon (1981), which discusses research areas concerned with complex artifacts. It explains how AI can be viewed as both science and mathematics. Cohen (1995) gives an overview of experimental methodology within AI. Ford and Hayes (1995) give an opinionated view of the usefulness of the Turing Test.

Artificial Intelligence: The Very Idea, by John Haugeland (1985), gives a readable account of the philosophical and practical problems of AI. Cognitive science is well described by several recent texts (Johnson-Laird, 1988; Stillings et al., 1995; Thagard, 1996) and by the Encyclopedia of the Cognitive Sciences (Wilson and Keil, 1999). Baker (1989) covers the syntactic part of modern linguistics, and Chierchia and McConnell-Ginet (1990) cover semantics. Jurafsky and Martin (2000) cover computational linguistics.

Early AI is described in Feigenbaum and Feldman's Computers and Thought (1963), Minsky's Semantic Information Processing (1968), and the Machine Intelligence series edited by Donald Michie. A large number of influential papers have been anthologized by Webber