1 INTRODUCTION
In which we try to explain why we consider artificial intelligence to be a subject most worthy of study, and in which we try to decide what exactly it is, this being a good thing to decide before embarking.
We call ourselves Homo sapiens, "man the wise," because our mental capacities are so important to us. For thousands of years, we have tried to understand how we think; that is, how a mere handful of stuff can perceive, understand, predict, and manipulate a world far larger and more complicated than itself. The field of artificial intelligence, or AI, goes further still: it attempts not just to understand but also to build intelligent entities.
AI is one of the newest sciences. Work started in earnest soon after World War II, and the name itself was coined in 1956. Along with molecular biology, AI is regularly cited as the "field I would most like to be in" by scientists in other disciplines. A student in physics might reasonably feel that all the good ideas have already been taken by Galileo, Newton, Einstein, and the rest. AI, on the other hand, still has openings for several full-time Einsteins.
AI currently encompasses a huge variety of subfields, ranging from general-purpose areas, such as learning and perception, to such specific tasks as playing chess, proving mathematical theorems, writing poetry, and diagnosing diseases. AI systematizes and automates intellectual tasks and is therefore potentially relevant to any sphere of human intellectual activity. In this sense, it is truly a universal field.
Systems that think like humans
- "The exciting new effort to make computers think ... machines with minds, in the full and literal sense." (Haugeland, 1985)
- "[The automation of] activities that we associate with human thinking, activities such as decision-making, problem solving, learning ..." (Bellman, 1978)

Systems that think rationally
- "The study of mental faculties through the use of computational models." (Charniak and McDermott, 1985)
- "The study of the computations that make it possible to perceive, reason, and act." (Winston, 1992)

Systems that act like humans
- "The art of creating machines that perform functions that require intelligence when performed by people." (Kurzweil, 1990)
- "The study of how to make computers do things at which, at the moment, people are better." (Rich and Knight, 1991)

Systems that act rationally
- "Computational Intelligence is the study of the design of intelligent agents." (Poole et al., 1998)
- "AI ... is concerned with intelligent behavior in artifacts." (Nilsson, 1998)

Figure 1.1 Some definitions of artificial intelligence, organized into four categories.

Historically, all four approaches to AI have been followed. As one might expect, a tension exists between approaches centered around humans and approaches centered around rationality.¹ A human-centered approach must be an empirical science, involving hypothesis and experimental confirmation. A rationalist approach involves a combination of mathematics and engineering. Each group has both disparaged and helped the other. Let us look at the four approaches in more detail.

¹ We should point out that, by distinguishing between human and rational behavior, we are not suggesting that humans are necessarily "irrational" in the sense of "emotionally unstable" or "insane." One merely need note that we are not perfect: we are not all chess grandmasters, even those of us who know all the rules of chess; and, unfortunately, not everyone gets an A on the exam. Some systematic errors in human reasoning are cataloged by Kahneman et al. (1982).

Acting humanly: The Turing Test approach

The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory operational definition of intelligence. Rather than proposing a long and perhaps controversial list of qualifications required for intelligence, he suggested a test based on indistinguishability from undeniably intelligent entities: human beings. The computer passes the test if a human interrogator, after posing some written questions, cannot tell whether the written responses come from a person or not. Chapter 26 discusses the details of the test and whether a computer is really intelligent if it passes. For now, we note that programming a computer to pass the test provides plenty to work on. The computer would need to possess the following capabilities:

- natural language processing to enable it to communicate successfully in English;
- knowledge representation to store what it knows or hears;
- automated reasoning to use the stored information to answer questions and to draw new conclusions;
- machine learning to adapt to new circumstances and to detect and extrapolate patterns.

Turing's test deliberately avoided direct physical interaction between the interrogator and the computer, because physical simulation of a person is unnecessary for intelligence. However, the so-called total Turing Test includes a video signal so that the interrogator can test the subject's perceptual abilities, as well as the opportunity for the interrogator to pass physical objects "through the hatch." To pass the total Turing Test, the computer will need

- computer vision to perceive objects, and
- robotics to manipulate objects and move about.

These six disciplines compose most of AI, and Turing deserves credit for designing a test that remains relevant 50 years later. Yet AI researchers have devoted little effort to passing the Turing test, believing that it is more important to study the underlying principles of intelligence than to duplicate an exemplar. The quest for "artificial flight" succeeded when the Wright brothers and others stopped imitating birds and learned about aerodynamics. Aeronautical engineering texts do not define the goal of their field as making "machines that fly so exactly like pigeons that they can fool even other pigeons."

Thinking humanly: The cognitive modeling approach

If we are going to say that a given program thinks like a human, we must have some way of determining how humans think. We need to get inside the actual workings of human minds. There are two ways to do this: through introspection (trying to catch our own thoughts as they go by) and through psychological experiments. Once we have a sufficiently precise theory of the mind, it becomes possible to express the theory as a computer program. If the program's input/output and timing behaviors match corresponding human behaviors, that is evidence that some of the program's mechanisms could also be operating in humans. For example, Allen Newell and Herbert Simon, who developed GPS, the "General Problem Solver" (Newell and Simon, 1961), were not content to have their program solve problems correctly. They were more concerned with comparing the trace of its reasoning steps to traces of human subjects solving the same problems. The interdisciplinary field of cognitive science brings together computer models from AI and experimental techniques from psychology to try to construct precise and testable theories of the workings of the human mind.

Cognitive science is a fascinating field, worthy of an encyclopedia in itself (Wilson and Keil, 1999). We will not attempt to describe what is known of human cognition in this book. We will occasionally comment on similarities or differences between AI techniques and human cognition. Real cognitive science, however, is necessarily based on experimental investigation of actual humans or animals, and we assume that the reader has access only to a computer for experimentation.

In the early days of AI there was often confusion between the approaches: an author would argue that an algorithm performs well on a task and that it is therefore a good model
of human performance, or vice versa. Modern authors separate the two kinds of claims; this distinction has allowed both AI and cognitive science to develop more rapidly. The two fields continue to fertilize each other, especially in the areas of vision and natural language. Vision in particular has recently made advances via an integrated approach that considers neurophysiological evidence and computational models.

Thinking rationally: The "laws of thought" approach

The Greek philosopher Aristotle was one of the first to attempt to codify "right thinking," that is, irrefutable reasoning processes. His syllogisms provided patterns for argument structures that always yielded correct conclusions when given correct premises; for example, "Socrates is a man; all men are mortal; therefore, Socrates is mortal." These laws of thought were supposed to govern the operation of the mind; their study initiated the field called logic.

Logicians in the 19th century developed a precise notation for statements about all kinds of things in the world and about the relations among them. (Contrast this with ordinary arithmetic notation, which provides mainly for equality and inequality statements about numbers.) By 1965, programs existed that could, in principle, solve any solvable problem described in logical notation.² The so-called logicist tradition within artificial intelligence hopes to build on such programs to create intelligent systems.

There are two main obstacles to this approach. First, it is not easy to take informal knowledge and state it in the formal terms required by logical notation, particularly when the knowledge is less than 100% certain. Second, there is a big difference between being able to solve a problem "in principle" and doing so in practice. Even problems with just a few dozen facts can exhaust the computational resources of any computer unless it has some guidance as to which reasoning steps to try first. Although both of these obstacles apply to any attempt to build computational reasoning systems, they appeared first in the logicist tradition.

² If there is no solution, the program might never stop looking for one.

Acting rationally: The rational agent approach

An agent is just something that acts (agent comes from the Latin agere, to do). But computer agents are expected to have other attributes that distinguish them from mere "programs," such as operating under autonomous control, perceiving their environment, persisting over a prolonged time period, adapting to change, and being capable of taking on another's goals. A rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome.

In the "laws of thought" approach to AI, the emphasis was on correct inferences. Making correct inferences is sometimes part of being a rational agent, because one way to act rationally is to reason logically to the conclusion that a given action will achieve one's goals and then to act on that conclusion. On the other hand, correct inference is not all of rationality, because there are often situations where there is no provably correct thing to do, yet something must still be done. There are also ways of acting rationally that cannot be said to involve inference. For example, recoiling from a hot stove is a reflex action that is usually more successful than a slower action taken after careful deliberation.

All the skills needed for the Turing Test are there to allow rational actions. Thus, we need the ability to represent knowledge and reason with it because this enables us to reach good decisions in a wide variety of situations. We need to be able to generate comprehensible sentences in natural language because saying those sentences helps us get by in a complex society. We need learning not just for erudition, but because having a better idea of how the world works enables us to generate more effective strategies for dealing with it. We need visual perception not just because seeing is fun, but to get a better idea of what an action might achieve; for example, being able to see a tasty morsel helps one to move toward it.

For these reasons, the study of AI as rational-agent design has at least two advantages. First, it is more general than the "laws of thought" approach, because correct inference is just one of several possible mechanisms for achieving rationality. Second, it is more amenable to scientific development than are approaches based on human behavior or human thought, because the standard of rationality is clearly defined and completely general. Human behavior, on the other hand, is well adapted for one specific environment and is the product, in part, of a complicated and largely unknown evolutionary process that still is far from producing perfection. This book will therefore concentrate on general principles of rational agents and on components for constructing them. We will see that despite the apparent simplicity with which the problem can be stated, an enormous variety of issues come up when we try to solve it. Chapter 2 outlines some of these issues in more detail.

One important point to keep in mind: we will see before too long that achieving perfect rationality (always doing the right thing) is not feasible in complicated environments. The computational demands are just too high. For most of the book, however, we will adopt the working hypothesis that perfect rationality is a good starting point for analysis. It simplifies the problem and provides the appropriate setting for most of the foundational material in the field. Chapters 6 and 17 deal explicitly with the issue of limited rationality: acting appropriately when there is not enough time to do all the computations one might like.

1.2 THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE

In this section, we provide a brief history of the disciplines that contributed ideas, viewpoints, and techniques to AI. Like any history, this one is forced to concentrate on a small number of people, events, and ideas and to ignore others that also were important. We organize the history around a series of questions. We certainly would not wish to give the impression that these questions are the only ones the disciplines address or that the disciplines have all been working toward AI as their ultimate fruition.

Philosophy (428 B.C.-present)

- Can formal rules be used to draw valid conclusions?
- How does the mind arise from a physical brain?
- Where does knowledge come from?
Aristotle (384-322 B.C.) was the first to formulate a precise set of laws governing the rational part of the mind. He developed an informal system of syllogisms for proper reasoning, which in principle allowed one to generate conclusions mechanically, given initial premises. Much later, Ramon Lull (d. 1315) had the idea that useful reasoning could actually be carried out by a mechanical artifact. His "concept wheels" are on the cover of this book. Thomas Hobbes (1588-1679) proposed that reasoning was like numerical computation, that "we add and subtract in our silent thoughts." The automation of computation itself was already well under way; around 1500, Leonardo da Vinci (1452-1519) designed but did not build a mechanical calculator; recent reconstructions have shown the design to be functional. The first known calculating machine was constructed around 1623 by the German scientist Wilhelm Schickard (1592-1635), although the Pascaline, built in 1642 by Blaise Pascal (1623-1662), is more famous. Pascal wrote that "the arithmetical machine produces effects which appear nearer to thought than all the actions of animals." Gottfried Wilhelm Leibniz (1646-1716) built a mechanical device intended to carry out operations on concepts rather than numbers, but its scope was rather limited.

Now that we have the idea of a set of rules that can describe the formal, rational part of the mind, the next step is to consider the mind as a physical system. René Descartes (1596-1650) gave the first clear discussion of the distinction between mind and matter and of the problems that arise. One problem with a purely physical conception of the mind is that it seems to leave little room for free will: if the mind is governed entirely by physical laws, it has no more free will than a rock "deciding" to fall toward the center of the earth. Although a strong advocate of the power of reasoning, Descartes was also a proponent of dualism. He held that there is a part of the human mind (or soul or spirit) that is outside of nature, exempt from physical laws. Animals, on the other hand, did not possess this dual quality; they could be treated as machines. An alternative to dualism is materialism, which holds that the brain's operation according to the laws of physics constitutes the mind. Free will is simply the way that the perception of available choices appears to the choice process.

Given a physical mind that manipulates knowledge, the next problem is to establish the source of knowledge. The empiricism movement, starting with Francis Bacon's (1561-1626) Novum Organum, is characterized by a dictum of John Locke (1632-1704): "Nothing is in the understanding, which was not first in the senses." David Hume's (1711-1776) A Treatise of Human Nature (Hume, 1739) proposed what is now known as the principle of induction: that general rules are acquired by exposure to repeated associations between their elements. Building on the work of Ludwig Wittgenstein (1889-1951) and Bertrand Russell (1872-1970), the famous Vienna Circle, led by Rudolf Carnap (1891-1970), developed the doctrine of logical positivism. This doctrine holds that all knowledge can be characterized by logical theories connected, ultimately, to observation sentences that correspond to sensory inputs. Carnap's The Logical Structure of the World (1928) defined an explicit computational procedure for extracting knowledge from elementary experiences. It was probably the first theory of mind as a computational process.

The final element in the philosophical picture of the mind is the connection between knowledge and action. This question is vital to AI, because intelligence requires action as well as reasoning. Moreover, only by understanding how actions are justified can we understand how to build an agent whose actions are justifiable (or rational). Aristotle argued that actions are justified by a logical connection between goals and knowledge of the action's outcome (the last part of this extract also appears on the front cover of this book):

  But how does it happen that thinking is sometimes accompanied by action and sometimes not, sometimes by motion, and sometimes not? It looks as if almost the same thing happens as in the case of reasoning and making inferences about unchanging objects. But in that case the end is a speculative proposition ... whereas here the conclusion which results from the two premises is an action. ... I need covering; a cloak is a covering. I need a cloak. What I need, I have to make; I need a cloak. I have to make a cloak. And the conclusion, the "I have to make a cloak," is an action. (Nussbaum, 1978, p. 40)

In the Nicomachean Ethics (Book III. 3, 1112b), Aristotle further elaborates on this topic, suggesting an algorithm:

  We deliberate not about ends, but about means. For a doctor does not deliberate whether he shall heal, nor an orator whether he shall persuade, ... They assume the end and consider how and by what means it is attained, and if it seems easily and best produced thereby; while if it is achieved by one means only they consider how it will be achieved by this and by what means this will be achieved, till they come to the first cause, ... and what is last in the order of analysis seems to be first in the order of becoming. And if we come on an impossibility, we give up the search, e.g., if we need money and this cannot be got; but if a thing appears possible we try to do it.

Aristotle's algorithm was implemented 2300 years later by Newell and Simon in their GPS program. We would now call it a regression planning system. (See Chapter 11.)

Goal-based analysis is useful, but does not say what to do when several actions will achieve the goal, or when no action will achieve it completely. Antoine Arnauld (1612-1694) correctly described a quantitative formula for deciding what action to take in cases like this (see Chapter 16). John Stuart Mill's (1806-1873) book Utilitarianism (Mill, 1863) promoted the idea of rational decision criteria in all spheres of human activity. The more formal theory of decisions is discussed in the following section.

Mathematics (c. 800-present)

- What are the formal rules to draw valid conclusions?
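The mechanical derivation Aristotle envisioned, reaching "Socrates is mortal" from "Socrates is a man" and "all men are mortal," can be mimicked in a few lines of code. The sketch below is purely illustrative (the rule encoding and function name are our own, not from the text): it forward-chains over "all X are Y" rules until no new conclusions appear.

```python
# Illustrative forward chaining over syllogism-style rules.
# facts: (subject, category) pairs; rules: "all X are Y" as {X: Y}.

def forward_chain(facts, rules):
    """Repeatedly apply the rules until no new fact can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for subject, category in list(derived):
            conclusion = rules.get(category)
            if conclusion and (subject, conclusion) not in derived:
                derived.add((subject, conclusion))  # draw the conclusion
                changed = True
    return derived

rules = {"man": "mortal"}           # "All men are mortal."
facts = {("Socrates", "man")}       # "Socrates is a man."
print(forward_chain(facts, rules))  # includes ("Socrates", "mortal")
```

The loop terminates because each pass either adds a new fact or stops, which is exactly the "in principle" mechanical generation of conclusions; the practical obstacles discussed above (formalizing knowledge, guiding the search) are what make real logicist systems hard.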
to the actions of other agents as individuals. For "small" economies, the situation is much more like a game: the actions of one player can significantly affect the utility of another (either positively or negatively). Von Neumann and Morgenstern's development of game theory (see also Luce and Raiffa, 1957) included the surprising result that, for some games, a rational agent should act in a random fashion, or at least in a way that appears random to the adversaries.

Figure 1.2 The parts of a nerve cell or neuron. Each neuron consists of a cell body, or soma, that contains a cell nucleus; branching out from the cell body are fibers called dendrites and a single long fiber called the axon. The axon stretches out for a long distance, much longer than the scale in this diagram indicates: typically 1 cm (100 times the diameter of the cell body), but it can reach up to 1 meter.

For the most part, economists did not address the third question listed above, namely, how to make rational decisions when payoffs from actions are not immediate but instead result from several actions taken in sequence. This topic was pursued in the field of operations research, which emerged in World War II from efforts in Britain to optimize radar installations, and later found civilian applications in complex management decisions. The work of Richard Bellman (1957) formalized a class of sequential decision problems called Markov decision processes, which we study in Chapters 17 and 21.

Work in economics and operations research has contributed much to our notion of rational agents, yet for many years AI research developed along entirely separate paths. One reason was the apparent complexity of making rational decisions. Herbert Simon (1916-2001), the pioneering AI researcher, won the Nobel Prize in economics in 1978 for his early work showing that models based on satisficing (making decisions that are "good enough," rather than laboriously calculating an optimal decision) gave a better description of actual human behavior (Simon, 1947).
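The sequential decision problems Bellman formalized can be made concrete with a toy example. The sketch below is illustrative only: the two-state "battery" chain and every number in it are invented here, not taken from the text. It applies the Bellman update V(s) = max_a sum_s' P(s'|s,a) [R + gamma * V(s')] until the state values settle.

```python
# A toy Markov decision process in the spirit of Bellman (1957).
# States, actions, probabilities, and rewards are all invented
# for illustration.
# transitions[state][action] = [(probability, next_state, reward), ...]
transitions = {
    "low":  {"wait":     [(1.0, "low", 0.0)],
             "recharge": [(1.0, "high", -1.0)]},
    "high": {"work":     [(0.8, "high", 2.0), (0.2, "low", 2.0)],
             "wait":     [(1.0, "high", 0.0)]},
}

def value_iteration(transitions, gamma=0.9, sweeps=100):
    """Apply the Bellman update repeatedly until the values settle."""
    V = {s: 0.0 for s in transitions}
    for _ in range(sweeps):
        V = {s: max(sum(p * (r + gamma * V[nxt]) for p, nxt, r in outcomes)
                    for outcomes in transitions[s].values())
             for s in transitions}
    return V

values = value_iteration(transitions)
print(values)  # "high" ends up more valuable than "low"
```

The point of the formalization is visible even at this scale: the value of recharging now depends on the reward of working later, a coupling across time that one-shot decision theory cannot express.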
                       Computer               Human Brain
Computational units    1 CPU, 10^8 gates      10^11 neurons
Storage units          10^10 bits RAM         10^11 neurons
                       10^11 bits disk        10^14 synapses
Cycle time             10^-9 sec              10^-3 sec
Bandwidth              10^10 bits/sec         10^14 bits/sec
Memory updates/sec     10^9                   10^14

Figure 1.3 A crude comparison of the raw computational resources available to computers (circa 2003) and brains. The computer's numbers have increased by at least a factor of 10 since the first edition of this book, and are expected to do so again this decade. The brain's numbers have not changed.

... we are still a long way from understanding how any of these cognitive processes work. The truly amazing conclusion is that a collection of simple cells can lead to thought, action, and consciousness or, in other words, that brains cause minds (Searle, 1992). The only real alternative theory is mysticism: that there is some mystical realm in which minds operate that is beyond physical science.

Brains and digital computers perform quite different tasks and have different properties. Figure 1.3 shows that there are 1000 times more neurons in the typical human brain than there are gates in the CPU of a typical high-end computer. Moore's Law predicts that the CPU's gate count will equal the brain's neuron count around 2020. Of course, little can be inferred from such predictions; moreover, the difference in storage capacity is minor compared to the difference in switching speed and in parallelism. Computer chips can execute an instruction in a nanosecond, whereas neurons are millions of times slower. Brains more than make up for this, however, because all the neurons and synapses are active simultaneously, whereas most current computers have only one or at most a few CPUs.

Psychology (1879-present)

... Wilhelm Wundt insisted on carefully controlled experiments in which his workers would perform a perceptual or associative task while introspecting on their thought processes. The careful controls went a long way toward making psychology a science, but the subjective nature of the data made it unlikely that an experimenter would ever disconfirm his or her own theories. Biologists studying animal behavior, on the other hand, lacked introspective data and developed an objective methodology, as described by H. S. Jennings (1906) in his influential work Behavior of the Lower Organisms. Applying this viewpoint to humans, the behaviorism movement, led by John Watson (1878-1958), rejected any theory involving mental processes on the grounds that introspection could not provide reliable evidence. Behaviorists insisted on studying only objective measures of the percepts (or stimulus) given to an animal and its resulting actions (or response). Mental constructs such as knowledge, beliefs, goals, and reasoning steps were dismissed as unscientific "folk psychology." Behaviorism discovered a lot about rats and pigeons, but had less success at understanding humans. Nevertheless, it exerted a strong hold on psychology (especially in the United States) from about 1920 to 1960.

The view of the brain as an information-processing device, which is a principal characteristic of cognitive psychology, can be traced back at least to the works of William James (1842-1910). Helmholtz also insisted that perception involved a form of unconscious logical inference. The cognitive viewpoint was largely eclipsed by behaviorism in the United States, but at Cambridge's Applied Psychology Unit, directed by Frederic Bartlett (1886-1969), cognitive modeling was able to flourish. The Nature of Explanation, by Bartlett's student and successor Kenneth Craik (1943), forcefully reestablished the legitimacy of such "mental" terms as beliefs and goals, arguing that they are just as scientific as, say, using pressure and temperature to talk about gases, despite their being made of molecules that have neither. Craik specified the three key steps of a knowledge-based agent: (1) the stimulus must be translated into an internal representation, (2) the representation is manipulated by cognitive processes to derive new internal representations, and (3) these are in turn retranslated back into action.
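Craik's three steps can be read as the skeleton of any knowledge-based agent. The toy sketch below is our own illustration of that skeleton: the stimulus format, the food/approach rule, and all function names are invented for the example, not taken from the text.

```python
# Craik's three steps of a knowledge-based agent:
# (1) translate a stimulus into an internal representation,
# (2) manipulate the representation to derive a new one,
# (3) retranslate the result back into action.

def perceive(stimulus: str) -> dict:
    """Step 1: stimulus -> internal representation (a trivial parse)."""
    obj, distance = stimulus.split("@")
    return {"object": obj, "distance": float(distance)}

def reason(belief: dict) -> dict:
    """Step 2: derive a new internal representation (a decision)."""
    action = "approach" if belief["object"] == "food" else "avoid"
    return {"action": action, "target": belief["object"]}

def act(decision: dict) -> str:
    """Step 3: retranslate the representation into action."""
    return f"{decision['action']} {decision['target']}"

print(act(reason(perceive("food@2.0"))))  # -> approach food
```

However trivial, the pipeline makes Craik's claim concrete: the "mental" vocabulary of beliefs and goals names the intermediate representations between stimulus and response.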
... The problem of understanding language soon turned out to be considerably more complex than it seemed in 1957. The understanding of language requires an understanding of the subject matter and context, not just an understanding of the structure of sentences. This might seem obvious, but it was not widely appreciated until the 1960s. Much of the early work in knowledge representation (the study of how to put knowledge into a form that a computer can reason with) was tied to language and informed by research in linguistics, which was connected in turn to decades of work on the philosophical analysis of language.

With the background material behind us, we are ready to cover the development of AI itself.

The gestation of artificial intelligence (1943-1955)

The first work that is now generally recognized as AI was done by Warren McCulloch and Walter Pitts (1943). They drew on three sources: knowledge of the basic physiology and function of neurons in the brain; a formal analysis of propositional logic due to Russell and Whitehead; and Turing's theory of computation. They proposed a model of artificial neurons ...

The birth of artificial intelligence (1956)

Princeton was home to another influential figure in AI, John McCarthy. After graduation, McCarthy moved to Dartmouth College, which was to become the official birthplace of the field. McCarthy convinced Minsky, Claude Shannon, and Nathaniel Rochester to help him bring together U.S. researchers interested in automata theory, neural nets, and the study of intelligence. They organized a two-month workshop at Dartmouth in the summer of 1956. There were 10 attendees in all, including Trenchard More from Princeton, Arthur Samuel from IBM, and Ray Solomonoff and Oliver Selfridge from MIT.

Two researchers from Carnegie Tech, Allen Newell and Herbert Simon, rather stole the show. Although the others had ideas and in some cases programs for particular applications such as checkers, Newell and Simon already had a reasoning program, the Logic Theorist (LT), about which Simon claimed, "We have invented a computer program capable of thinking non-numerically, and thereby solved the venerable mind-body problem." Soon after the workshop, the program was able to prove most of the theorems in Chapter 2 of Russell and Whitehead's Principia Mathematica. Russell was reportedly delighted when Simon showed him that the program had come up with a proof for one theorem that was shorter than the one in Principia. The editors of the Journal of Symbolic Logic were less impressed; they rejected a paper coauthored by Newell, Simon, and Logic Theorist.

The Dartmouth workshop did not lead to any new breakthroughs, but it did introduce all the major figures to each other. For the next 20 years, the field would be dominated by these people and their students and colleagues at MIT, CMU, Stanford, and IBM.
:",j:ff ffi"ff T[']iff _,1H:*#r;il*:r"*:.i,,*,*::::i[i,:$ ing. Also in 1958, McCarthy published a paper entitled Programs with Common Sense, in
,ilil11?::::T:T,:::.:i1iyi,,_,r,
that wili rtrnction aut,onomousrv
;;o
in.o*pr.^,
;,J;ilT#l,l"j*1lt:"orro
to,,,;J,T:1T,lj:::f,il
rhe onry rierd
buird,r,,.;';i:
*ii.t, t. described the Advice Taker, a hypothetical program that can be seen as the first
complere AI system. Like the Logic Theorist and Geometry Theorem Prover, McCarthy's
problems. But unlike the
program was designed to use knowledge to search for solutions to
Early en thusiasmr great expecta :, ottrers, it wa-s to embody general knowledge of the world. For example, he showed how some
tions ( I 952_1969) the program to generate a plan to drive to the airport to catch
simple axioms would enable
The earry years of AI in the normal
t,
were full of successes-in a plane. The program was also designed so that could it accept new axioms
a limited wAv f-ri'o^ rL^
and programming
of th tools -.-: of operation, thereby allowing it to achieve competence in new areas without being
wereseenas,hings,n,,.";:,1*{:J:}rqii:,j.,i"I;l*::;ilJ,:",".ilii$. "ourr.
reprogra,runed. The Advice Taker thus embodied the central principles of knowledge repre-
sentation and reasoning: that it is useful to have a formal, explicit representation of the world
,Hy,,:T: frl
by Turing.) AI researche^
ver.,n", J, *,
I1lTffTo ntu"t do x'" ",(See
rer y c re
"",r :
chapter
s talli h m;;
;,,re, pre f ened,
26 for ar""g riri X,s
s
;;;; and of the way an agent's acfions affect the world and to be able to manipulate these repre-
nu, garrrered i sentations with deductive processes. It is remarkable how much of the 1958 paper remains
M"c;;;";;filil;T:'rallv responded bv demonsrrarins
on. "i another.
afrer i relevant even todaY.
Newell and Simon's early success was followed up with the General Problem Solver, or GPS. This program was designed from the start to imitate human problem-solving protocols. Within the limited class of puzzles it could handle, it turned out that the order in which the program considered subgoals and possible actions was similar to that in which humans approached the same problems. Thus, GPS was probably the first program to embody the "thinking humanly" approach. The success of GPS and subsequent programs as models of cognition led Newell and Simon to formulate the famous physical symbol system hypothesis, which states that "a physical symbol system has the necessary and sufficient means for general intelligent action." What they meant is that any system (human or machine) exhibiting intelligence must operate by manipulating data structures composed of symbols.

At IBM, Nathaniel Rochester and his colleagues produced some of the first AI programs. Herbert Gelernter (1959) constructed the Geometry Theorem Prover, which was able to prove theorems that many students of mathematics would find quite tricky. Starting in 1952, Arthur Samuel wrote a series of programs for checkers (draughts) that eventually learned to play at a strong amateur level. Along the way, he disproved the idea that computers can do only what they are told to: his program quickly learned to play a better game than its creator. Like Turing, Samuel had trouble finding computer time; working at night, he used machines that were still on the testing floor at IBM's manufacturing plant. Chapter 6 covers game playing, and Chapter 21 describes the learning techniques used by Samuel.

John McCarthy moved from Dartmouth to MIT and there made three crucial contributions in one historic year: 1958. In MIT AI Lab Memo No. 1, McCarthy defined the high-level language Lisp, which was to become the dominant AI programming language and is among the oldest high-level languages still in current use.

1958 also marked the year that Marvin Minsky moved to MIT. His initial collaboration with McCarthy did not last, however. McCarthy stressed representation and reasoning in formal logic, whereas Minsky was more interested in getting programs to work and eventually developed an anti-logical outlook. In 1963, McCarthy started the AI lab at Stanford. His plan to use logic to build the ultimate Advice Taker was advanced by J. A. Robinson's discovery of the resolution method (a complete theorem-proving algorithm for first-order logic; see Chapter 9). Work at Stanford emphasized general-purpose methods for logical reasoning. Applications of logic included Cordell Green's question-answering and planning systems (Green, 1969b) and the Shakey robotics project at the new Stanford Research Institute (SRI). The latter project, discussed further in Chapter 25, was the first to demonstrate the complete integration of logical reasoning and physical activity.
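The resolution rule mentioned above is easy to illustrate in its propositional form. The sketch below is illustrative only (Robinson's full method works on first-order clauses via unification, and the clause encoding here is an assumption of this example):

```python
# Propositional resolution sketch. A clause is a frozenset of literals;
# "~P" denotes the negation of "P".
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Return every resolvent obtainable from the two clauses."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            # Drop the complementary pair, union the remaining literals.
            resolvents.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return resolvents

# From (P or Q) and (~P or R), resolution derives (Q or R).
print(resolve(frozenset({"P", "Q"}), frozenset({"~P", "R"})))
```

Deriving the empty clause (e.g., from {P} and {~P}) signals a contradiction, which is how a resolution theorem prover establishes a theorem by refutation.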
Minsky supervised a series of students who chose limited problems that appeared to require intelligence to solve. These limited domains became known as microworlds.
James Slagle's SAINT program (1963a) was able to solve closed-form calculus integration problems typical of first-year college courses. Tom Evans's ANALOGY program (1968) solved geometric analogy problems that appear in IQ tests, such as the one in Figure 1.4. Daniel Bobrow's STUDENT program (1967) solved algebra story problems, such as the following:

    If the number of customers Tom gets is twice the square of 20 percent of the number
    of advertisements he runs, and the number of advertisements he runs is 45, what is the
    number of customers Tom gets?
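Once the English is translated into an equation, the arithmetic in this story problem is trivial; the sketch below shows only that final step (illustrative: the hard part, which STUDENT actually performed, was building the equation from the English):

```python
# The story translates to: customers = 2 * (20% of ads)^2, with ads = 45.
# (Illustrative only; Bobrow's STUDENT parsed the English itself.)
def customers(ads):
    return 2 * (0.20 * ads) ** 2

print(customers(45))  # 2 * 9**2 = 162.0
```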
The most famous microworld was the blocks world, which consists of a set of solid blocks placed on a tabletop (or more often, a simulation of a tabletop), as shown in Figure 1.5. A typical task in this world is to rearrange the blocks in a certain way, using a robot hand that can pick up one block at a time. The blocks world was home to the vision project of David Huffman (1971), the vision and constraint-propagation work of David Waltz (1975), the learning theory of Patrick Winston (1970), the natural language understanding program of Terry Winograd (1972), and the planner of Scott Fahlman (1974).
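A blocks-world state is simple to represent. The following is a minimal sketch (an illustrative encoding, not any of the historical programs) with a hand that moves one clear block at a time:

```python
# The world is a list of stacks; a block is "clear" if it is on top of a stack.
def move(stacks, block, dest):
    """Move a clear block onto the stack with index `dest`."""
    for stack in stacks:
        if stack and stack[-1] == block:
            stack.pop()
            stacks[dest].append(block)
            return stacks
    raise ValueError(f"{block} is not clear")

stacks = [["A", "B"], ["C"], []]   # B sits on A; C stands alone; one free spot
move(stacks, "B", 2)               # put B on the table
move(stacks, "C", 0)               # put C on A
print(stacks)  # [['A', 'C'], [], ['B']]
```

Trying to move a covered block (here, A once C is on it) raises an error, mirroring the one-block-at-a-time restriction in the text.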
The History of Artificial Intelligence
Early work building on the neural networks of McCulloch and Pitts also flourished. The work of Winograd and Cowan (1963) showed how a large number of elements could collectively represent an individual concept, with a corresponding increase in robustness and parallelism. Hebb's learning methods were enhanced by Bernie Widrow (Widrow and Hoff, 1960; Widrow, 1962), who called his networks adalines, and by Frank Rosenblatt (1962) with his perceptrons. Rosenblatt proved the perceptron convergence theorem, showing that his learning algorithm could adjust the connection strengths of a perceptron to match any input data, provided such a match existed. These topics are covered in Chapter 20.
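Rosenblatt's learning rule is short enough to sketch. The code below is illustrative (a hypothetical encoding, not Rosenblatt's implementation): on a linearly separable function such as AND, the convergence theorem guarantees the updates settle on correct weights; for the "two inputs different" function (XOR) no such weights exist, so the rule can never converge.

```python
# Perceptron learning rule sketch: nudge the weights toward each
# misclassified example until the data are fit (if a fit exists).
def train(examples, epochs=50):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in examples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out
            w[0] += err * x[0]
            w[1] += err * x[1]
            b += err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# AND is linearly separable, so the rule converges to a correct classifier.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(AND)
print([predict(w, b, x) for x, _ in AND])  # [0, 0, 0, 1]
```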
From the beginning, AI researchers were not shy about making predictions of their coming successes. The following statement by Herbert Simon in 1957 is often quoted:

    It is not my aim to surprise or shock you - but the simplest way I can summarize is to say
    that there are now in the world machines that think, that learn and that create. Moreover,
    their ability to do these things is going to increase rapidly until - in a visible future - the
    range of problems they can handle will be coextensive with the range to which the human
    mind has been applied.

Figure 1.4 An example problem solved by Evans's ANALOGY program.
Terms such as "visible future" can be interpreted in various ways, but Simon also made a more concrete prediction: that within 10 years a computer would be chess champion, and a significant mathematical theorem would be proved by machine. These predictions came true (or approximately true) within 40 years rather than 10. Simon's over-confidence was due to the promising performance of early AI systems on simple examples. In almost all cases, however, these early systems turned out to fail miserably when tried out on wider selections of problems and on more difficult problems.

The first kind of difficulty arose because most early programs contained little or no knowledge of their subject matter; they succeeded by means of simple syntactic manipulations. A typical story occurred in early machine translation efforts, which were generously funded by the U.S. National Research Council in an attempt to speed up the translation of Russian scientific papers in the wake of the Sputnik launch in 1957. It was thought initially that simple syntactic transformations based on the grammars of Russian and English, and word replacement using an electronic dictionary, would suffice to preserve the exact meanings of sentences. The fact is that translation requires general knowledge of the subject matter in order to resolve ambiguity and establish the content of the sentence. The famous re-translation of "the spirit is willing but the flesh is weak" as "the vodka is good but the meat is rotten" illustrates the difficulties encountered. In 1966, a report by an advisory committee found that "there has been no machine translation of general scientific text, and none
is in immediate prospect." All U.S. government funding for academic translation projects was canceled. Today, machine translation is an imperfect but widely used tool for technical, commercial, government, and Internet documents.

Figure 1.5 A scene from the blocks world. SHRDLU (Winograd, 1972) has just completed the command "Find a block which is taller than the one you are holding and put it in the box."
The second kind of difficulty was the intractability of many of the problems that AI was attempting to solve. Most of the early AI programs solved problems by trying out different combinations of steps until the solution was found. This strategy worked initially because microworlds contained very few objects and hence very few possible actions and very short solution sequences. Before the theory of computational complexity was developed, it was widely thought that "scaling up" to larger problems was simply a matter of faster hardware and larger memories. The optimism that accompanied the development of resolution theorem
proving, for example, was soon dampened when researchers failed to prove theorems involving more than a few dozen facts. The fact that a program can find a solution in principle does not mean that the program contains any of the mechanisms needed to find it in practice.

The illusion of unlimited computational power was not confined to problem-solving programs. Early experiments in machine evolution (now called genetic algorithms) (Friedberg, 1958; Friedberg et al., 1959) were based on the undoubtedly correct belief that by making an appropriate series of small mutations to a machine-code program, one can generate a program with good performance for any particular simple task. The idea, then, was to try random mutations with a selection process to preserve mutations that seemed useful. Despite thousands of hours of CPU time, almost no progress was demonstrated. Modern genetic algorithms use better representations and have shown more success.

Failure to come to grips with the "combinatorial explosion" was one of the main criticisms of AI contained in the Lighthill report (Lighthill, 1973), which formed the basis for the decision by the British government to end support for AI research in all but two universities. (Oral tradition paints a somewhat different and more colorful picture, with political ambitions and personal animosities whose description is beside the point.)

A third difficulty arose because of some fundamental limitations on the basic structures being used to generate intelligent behavior. For example, Minsky and Papert's book Perceptrons (1969) proved that, although perceptrons (a simple form of neural network) could be shown to learn anything they were capable of representing, they could represent very little. In particular, a two-input perceptron could not be trained to recognize when its two inputs were different. Although their results did not apply to more complex, multilayer networks, research funding for neural-net research soon dwindled to almost nothing. Ironically, the new back-propagation learning algorithms for multilayer networks that were to cause an enormous resurgence in neural-net research in the late 1980s were actually discovered first in 1969 (Bryson and Ho, 1969).

Knowledge-based systems: The key to power? (1969-1979)

The picture of problem solving that had arisen during the first decade of AI research was of a general-purpose search mechanism trying to string together elementary reasoning steps to find complete solutions. Such approaches have been called weak methods because, although general, they do not scale up to large or difficult problem instances. The alternative is to use more powerful, domain-specific knowledge that allows larger reasoning steps. The DENDRAL program was an early example of this approach. It was developed at Stanford, where Ed Feigenbaum, Bruce Buchanan, and Joshua Lederberg (a Nobel laureate geneticist) teamed up to solve the problem of inferring molecular structure from the information provided by a mass spectrometer. The input to the program consists of the elementary formula of the molecule (e.g., C6H13NO2) and the mass spectrum giving the masses of the various fragments of the molecule. For example, the mass spectrum might contain a peak at m = 15, corresponding to the mass of a methyl (CH3) fragment.

The naive version of the program generated all possible structures consistent with the formula, and then predicted what mass spectrum would be observed for each, comparing this with the actual spectrum. As one might expect, this is intractable for decent-sized molecules. The DENDRAL researchers consulted analytical chemists and found that they worked by looking for well-known patterns of peaks in the spectrum that suggested common substructures in the molecule. For example, the following rule is used to recognize a ketone (C=O) subgroup (which weighs 28):

    if there are two peaks at x1 and x2 such that
        (a) x1 + x2 = M + 28 (M is the mass of the whole molecule);
        (b) x1 - 28 is a high peak;
        (c) x2 - 28 is a high peak;
        (d) at least one of x1 and x2 is high,
    then there is a ketone subgroup

Recognizing that the molecule contains a particular substructure reduces the number of possible candidates enormously. DENDRAL was powerful because

    All the relevant theoretical knowledge to solve these problems has been mapped over from its general form in the [spectrum prediction component] ("first principles") to efficient special forms ("cookbook recipes"). (Feigenbaum et al., 1971)

The significance of DENDRAL was that it was the first successful knowledge-intensive system: its expertise derived from large numbers of special-purpose rules. Later systems also incorporated the main theme of McCarthy's Advice Taker approach: the clean separation of the knowledge (in the form of rules) from the reasoning component. With this lesson in mind, Feigenbaum and others at Stanford began the Heuristic Programming Project (HPP), to investigate the extent to which the new methodology of expert systems could be applied to other areas of human expertise. The next major effort was in the area of medical diagnosis. Feigenbaum, Buchanan, and Dr. Edward Shortliffe developed MYCIN to diagnose blood infections. With about 450 rules, MYCIN was able to perform as well as some experts, and considerably better than junior doctors. It also contained two major differences from DENDRAL. First, unlike the DENDRAL rules, no general theoretical model existed from which the MYCIN rules could be deduced; they had to be acquired from extensive interviewing of experts.

The importance of domain knowledge was also apparent in the area of natural language understanding. Although Winograd's SHRDLU system for understanding natural language had engendered a good deal of excitement, its dependence on syntactic analysis caused some of the same problems as occurred in the early machine translation work. It was able to overcome ambiguity and understand pronoun references, but this was mainly because it was designed specifically for one area: the blocks world.
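The ketone rule quoted earlier is a pure pattern test over spectrum peaks, which is easy to express directly. This sketch uses hypothetical peak data and a hypothetical "high" threshold (the real DENDRAL was written in Lisp):

```python
# Sketch of the ketone rule: a spectrum maps fragment mass -> peak height.
def has_ketone(spectrum, whole_mass, high=10.0):
    peaks = list(spectrum)
    for x1 in peaks:
        for x2 in peaks:
            if (x1 + x2 == whole_mass + 28                      # (a)
                    and spectrum.get(x1 - 28, 0) > high          # (b)
                    and spectrum.get(x2 - 28, 0) > high          # (c)
                    and (spectrum[x1] > high or spectrum[x2] > high)):  # (d)
                return True
    return False

# Made-up spectrum satisfying all four conditions for a molecule of mass 100:
# x1 = 43 and x2 = 85 sum to 128, and peaks at 15 and 57 are both high.
spectrum = {43: 20.0, 85: 12.0, 15: 11.0, 57: 15.0}
print(has_ketone(spectrum, whole_mass=100))  # True
```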
Several researchers, including Eugene Charniak, a fellow graduate student of Winograd's at MIT, suggested that robust language understanding would require general knowledge about the world and a general method for using that knowledge.

At Yale, the linguist-turned-AI-researcher Roger Schank emphasized this point, claiming, "There is no such thing as syntax," which upset a lot of linguists but did serve to start a useful discussion. Schank and his students built a series of programs (Schank and Abelson, 1977; Wilensky, 1978; Schank and Riesbeck, 1981; Dyer, 1983) that all had the task of understanding natural language. The emphasis, however, was less on language per se and more on the problems of representing and reasoning with the knowledge required for language understanding.

This work caused a concurrent increase in the demands for workable knowledge representation schemes. A large number of different representation and reasoning languages were developed. Some were based on logic - for example, the Prolog language became popular in Europe, and the PLANNER family in the United States. Others, following Minsky's idea of frames (1975), adopted a more structured approach, assembling facts about particular object and event types and arranging the types into a large taxonomic hierarchy analogous to a biological taxonomy.

The return of neural networks (1986-present)

Although computer science had largely abandoned the field of neural networks in the late 1970s, work continued in other fields. Physicists such as John Hopfield (1982) used techniques from statistical mechanics to analyze the properties of networks, treating collections of nodes like collections of atoms. Psychologists including David Rumelhart and Geoff Hinton continued the study of neural-net models of memory. As we discuss in Chapter 20, the real impetus came in the mid-1980s, when at least four different groups reinvented the back-propagation learning algorithm first found in 1969 by Bryson and Ho. The algorithm was applied to many learning problems in computer science and psychology, and the widespread dissemination of the results caused great excitement. Connectionist models of intelligent systems were seen by some as direct competitors both to the symbolic models of Newell and Simon and to the logicist approach of McCarthy and others. The current view is that connectionist and symbolic approaches are complementary, not competing.
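Back-propagation itself fits in a few lines. The sketch below is illustrative (hypothetical layer sizes, learning rate, and random seed; not the 1969 or 1986 formulations verbatim): it trains a tiny two-layer network on XOR, the "two inputs different" function that a single perceptron cannot represent.

```python
import math
import random

random.seed(0)

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

# 2 inputs -> 2 hidden units -> 1 output; last weight in each row is a bias.
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
W2 = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    h = [sig(w[0] * x[0] + w[1] * x[1] + w[2]) for w in W1]
    y = sig(W2[0] * h[0] + W2[1] * h[1] + W2[2])
    return h, y

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

before = total_error()
rate = 0.5
for _ in range(2000):
    for x, t in data:
        h, y = forward(x)
        dy = (y - t) * y * (1 - y)              # output-unit error signal
        for j in range(2):                      # propagate error back one layer
            dh = dy * W2[j] * h[j] * (1 - h[j])
            W1[j][0] -= rate * dh * x[0]
            W1[j][1] -= rate * dh * x[1]
            W1[j][2] -= rate * dh
        W2[0] -= rate * dy * h[0]
        W2[1] -= rate * dy * h[1]
        W2[2] -= rate * dy

print("error before/after:", round(before, 3), round(total_error(), 3))
```

The point of the exercise is only that gradient descent through the hidden layer reduces the squared error, something the single-layer learning rule cannot do for XOR.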
The emergence of intelligent agents (1995-present)
Perhaps encouraged by the progress in solving the subproblems of AI, researchers have also started to look at the "whole agent" problem again. The work of Allen Newell, John Laird, and Paul Rosenbloom on SOAR (Newell, 1990; Laird et al., 1987) is the best-known example of a complete agent architecture. The so-called situated movement aims to understand the workings of agents embedded in real environments with continuous sensory inputs. One of the most important environments for intelligent agents is the Internet. AI systems have become so common in Web-based applications that the "-bot" suffix has entered everyday language.

Besides the first edition of this text (Russell and Norvig, 1995), other recent texts have also adopted the agent perspective (Poole et al., 1998; Nilsson, 1998). One consequence of trying to build complete agents is the realization that the previously isolated subfields of AI might need to be reorganized somewhat when their results are to be tied together. In particular, it is now widely appreciated that sensory systems (vision, sonar, speech recognition, etc.) cannot deliver perfectly reliable information about the environment. Hence, reasoning and planning systems must be able to handle uncertainty. A second major consequence of the agent perspective is that AI has been drawn into much closer contact with other fields, such as control theory and economics, that also deal with agents.
What can AI do today? A concise answer is difficult, because there are so many activities in so many subfields. Here we sample a few applications; others appear throughout the book.
Autonomous planning and scheduling: A hundred million miles from Earth, NASA's Remote Agent program became the first on-board autonomous planning program to control the scheduling of operations for a spacecraft (Jonsson et al., 2000). Remote Agent generated plans from high-level goals specified from the ground, and it monitored the operation of the spacecraft as the plans were executed, detecting, diagnosing, and recovering from problems as they occurred.
Game playing: IBM's Deep Blue became the first computer program to defeat the world champion in a chess match when it bested Garry Kasparov by a score of 3.5 to 2.5 in an exhibition match (Goodman and Keene, 1997). Kasparov said that he felt a "new kind of intelligence" across the board from him. Newsweek magazine described the match as "The brain's last stand." The value of IBM's stock increased by $18 billion.
Autonomous control: The ALVINN computer vision system was trained to steer a car to keep it following a lane. It was placed in CMU's NAVLAB computer-controlled minivan and used to navigate across the United States; for 2850 miles it was in control of steering the vehicle 98% of the time. A human took over the other 2%, mostly at exit ramps. NAVLAB has video cameras that transmit road images to ALVINN, which then computes the best direction to steer, based on experience from previous training runs.
Diagnosis: Medical diagnosis programs based on probabilistic analysis have been able to perform at the level of expert physicians in several areas of medicine. Heckerman (1991) describes a case where a leading expert on lymph-node pathology scoffs at a program's diagnosis of an especially difficult case. The creators of the program suggest he ask the computer for an explanation of the diagnosis. The machine points out the major factors influencing its decision and explains the subtle interaction of several of the symptoms in this case. Eventually, the expert agrees with the program.

Logistics planning: During the Persian Gulf crisis of 1991, U.S. forces deployed a Dynamic Analysis and Replanning Tool, DART (Cross and Walker, 1994), to do automated logistics planning and scheduling for transportation. This involved up to 50,000 vehicles, cargo, and people at a time, and had to account for starting points, destinations, routes, and conflict resolution among all parameters. The AI planning techniques generated in hours a plan that would have taken weeks with older methods. The Defense Advanced Research Projects Agency (DARPA) stated that this single application more than paid back DARPA's 30-year investment in AI.

Robotics: Many surgeons now use robot assistants in microsurgery. HipNav (DiGioia et al., 1996) is a system that uses computer vision techniques to create a three-dimensional model of a patient's internal anatomy and then uses robotic control to guide the insertion of a hip replacement prosthesis.

Language understanding and problem solving: PROVERB (Littman et al., 1999) is a computer program that solves crossword puzzles better than most humans, using constraints on possible word fillers, a large database of past puzzles, and a variety of information sources, including dictionaries and online databases such as a list of movies and the actors that appear in them. For example, it determines that the clue "Nice Story" can be solved by "ETAGE" because its database includes the clue/solution pair "Story in France/ETAGE" and because it recognizes that the patterns "Nice X" and "X in France" often have the same solution. The program does not know that Nice is a city in France, but it can solve the puzzle.

These are just a few examples of artificial intelligence systems that exist today. Not magic or science fiction, but rather science, engineering, and mathematics, to which this book provides an introduction.

SUMMARY

This chapter defines AI and establishes the cultural background against which it has developed. Some of the important points are as follows:

o Different people think of AI differently. Two important questions to ask are: Are you concerned with thinking or behavior? Do you want to model humans or work from an ideal standard?

o In this book, we adopt the view that intelligence is concerned mainly with rational action. Ideally, an intelligent agent takes the best possible action in a situation. We will study the problem of building agents that are intelligent in this sense.

o Philosophers (going back to 400 B.C.) made AI conceivable by considering the ideas that the mind is in some ways like a machine, that it operates on knowledge encoded in some internal language, and that thought can be used to choose what actions to take.

o Mathematicians provided the tools to manipulate statements of logical certainty as well as uncertain, probabilistic statements. They also set the groundwork for understanding computation and reasoning about algorithms.

o Economists formalized the problem of making decisions that maximize the expected outcome to the decision-maker.

o Psychologists adopted the idea that humans and animals can be considered information-processing machines. Linguists showed that language use fits into this model.

o Computer engineers provided the artifacts that make AI applications possible. AI programs tend to be large, and they could not work without the great advances in speed and memory that the computer industry has provided.

o Control theory deals with designing devices that act optimally on the basis of feedback from the environment. Initially, the mathematical tools of control theory were quite different from AI, but the fields are coming closer together.

o The history of AI has had cycles of success, misplaced optimism, and resulting cutbacks in enthusiasm and funding. There have also been cycles of introducing new creative approaches and systematically refining the best ones.

o AI has advanced more rapidly in the past decade because of greater use of the scientific method in experimenting with and comparing approaches.

o Recent progress in understanding the theoretical basis for intelligence has gone hand in hand with improvements in the capabilities of real systems. The subfields of AI have become more integrated, and AI has found common ground with other disciplines.

BIBLIOGRAPHICAL AND HISTORICAL NOTES

The methodological status of artificial intelligence is investigated in The Sciences of the Artificial, by Herb Simon (1981), which discusses research areas concerned with complex artifacts. It explains how AI can be viewed as both science and mathematics. Cohen (1995) gives an overview of experimental methodology within AI. Ford and Hayes (1995) give an opinionated view of the usefulness of the Turing Test.

Artificial Intelligence: The Very Idea, by John Haugeland (1985), gives a readable account of the philosophical and practical problems of AI. Cognitive science is well described by several recent texts (Johnson-Laird, 1988; Stillings et al., 1995; Thagard, 1996) and by the Encyclopedia of the Cognitive Sciences (Wilson and Keil, 1999). Baker (1989) covers the syntactic part of modern linguistics, and Chierchia and McConnell-Ginet (1990) cover semantics. Jurafsky and Martin (2000) cover computational linguistics.

Early AI is described in Feigenbaum and Feldman's Computers and Thought (1963), Minsky's Semantic Information Processing (1968), and the Machine Intelligence series edited by Donald Michie. A large number of influential papers have been anthologized by Webber