Four Basic Questions about Artificial Intelligence
Pei Wang

In this essay, I am going to discuss four basic questions about Artificial Intelligence (AI):
- What is AI?
- Can AI be built?
- How to build AI?
- Should AI be built?
Every AI researcher probably has answers to these questions, and so do many people interested in AI. These questions are basic, since the answers to many other questions in AI often depend on the answers to them.
In the following, I will briefly discuss each of the questions in a casual and informal manner, without assuming much prior knowledge on the part of the reader. Many hyperlinks are embedded to provide more detailed information, usually pointing to on-line materials. This writing makes no attempt to provide a comprehensive survey of the related topics, but only presents my analysis and answers, which are usually compared with a few other representative answers.
These four questions will be discussed in the given order, because the answer to the What question strongly influences the answers to the other questions, so it must be addressed first. After that, if the answer to the Can question is negative, it makes little sense to talk about the How and Should questions. Finally, if nobody knows how to achieve this goal, we do not need to worry about whether it is a good idea to actually create AI.
The aim of AI
Though it is not unusual for researchers in a field to explore in different directions, the situation of AI is still special in the extent of the diversity among its research goals.
The electronic digital computer was invented in the 1940s mainly to carry out numerical computations, which used to be a unique capability of the human mind. Very soon, some researchers realized that with proper coding, many other intellectual capabilities could be implemented in computers, too. It was then very natural for them to ask whether a computer can have all the capabilities of the human mind, which is usually called intelligence, thinking, or cognition. Among the pioneering thinkers were Alan Turing, Norbert Wiener, and John von Neumann.
The field of AI was founded by a group of researchers in the 1950s, including John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon. To them, the artificial intelligence problem was that of making a machine behave in ways that would be called intelligent if a human were so behaving.
In a general and rough sense, everyone agrees that AI means to build computer systems that are similar to the human mind. However, since it is obviously impossible for a computer to be identical to a human in all aspects, any further specification of AI must focus on a certain aspect of the human mind as essential, and treat the other aspects as irrelevant, incidental, or secondary.
On where AI should be similar to human intelligence, the different opinions can be divided into five groups:
- Structure: Since human intelligence is produced by the human brain, AI should aim at brain modeling. Example: HTM
- Behavior: Since human intelligence is displayed in human behavior, AI should aim at behavior simulation. Example: Turing Test
- Capability: Since intelligent people are the ones that can solve hard problems, AI should aim at expert systems in various domains of application. Example: Deep Blue
- Function: Since intelligence depends on cognitive functions like reasoning, learning, problem solving, and so on, AI should aim at cognitive architectures with these functions. Example: Soar
- Principle: Since the human mind follows principles that are not followed by conventional computers, AI should aim at the specification and implementation of these principles. Example: NARS
In a paper on the definitions of AI, I argued that all the above opinions on intelligence lead to valuable research results, but they do not subsume each other. I choose the principle approach, because I believe it captures the essence of what intelligence means in this context, and it gives AI a coherent and unique identity that distinguishes it from well-established branches of science such as cognitive psychology and computer science.
Even among the researchers who see intelligence as certain principles of rationality or optimization, there are still different positions on its condition or constraint. To me, intelligence refers to the ability to adapt to the environment while working with insufficient knowledge and resources, which means that an intelligent system should rely on finite processing capacity, work in real time, be open to unexpected tasks, and learn from experience.
According to this opinion, an AI system will be similar to a human being in how its behaviors are related to its experience. Since an AI will not have a human body, nor live in a human environment, its structure, behavior, and capability will still be different from those of a human being. The system will have cognitive functions like a human, though these functions are different aspects of the same internal process, so they often cannot be clearly separated from each other.
Even though such an AI can be built using existing hardware and software provided by the computer industry, the design of the system will not follow the current theories in computer science, since they typically assume some form of sufficiency of knowledge and resources with respect to the problems to be solved. For this reason, AI is not merely a novel application of computer science.
The possibility of AI
As soon as the idea of AI was raised, the debate on its possibility started. In his historic article Computing Machinery and Intelligence, Turing advocated for the possibility of thinking machines, then discussed several objections against it. Even though his arguments are persuasive, they did not settle the debate once and for all. Most of the objections he criticized are still alive in contemporary discussions, though often in different forms.
In the article Three Fundamental Misconceptions of Artificial Intelligence, I analyzed a few widespread misconceptions about AI, which are often explicitly or implicitly assumed by various proofs of the impossibility of AI, that is, claims that there is something the human mind can do but computers cannot. One of them is a variant of what Turing called Lady Lovelace's Objection, and nowadays it often goes like this: intelligence requires originality, creativity, and autonomy, but computers can only act according to predetermined programs, therefore computers cannot have intelligence. One version of this argument can be found in the book What Computers Still Can't Do by Hubert Dreyfus.

Though this argument sounds straightforward, its two premises are actually on different levels of description when talking about a system. Properties like originality, creativity, and autonomy are all about the system's high-level behaviors. When saying that they are not programmed in an intelligent system, what it means is that for any concrete stimulus or problem, the system's response or solution is not fully predetermined by the system's initial design, but also depends on the system's history and the current context. On the other hand, when saying that a computer's actions are programmed, it is about the low-level activities of the system. The fact that the system's low-level activities are predetermined by various programs does not necessarily mean that its behaviors when solving a problem always follow a fixed procedure.
Intuitively speaking, a computer's activities are controlled by a number of programs that are designed by human programmers. When the system faces a problem, there are two possibilities. In a conventional computer system, there is a single program responsible for this type of problem, so it will be invoked to solve this problem. When the system works in this mode, there is indeed no creativity or autonomy. It is fair to say that there is no intelligence involved, and that the system's behavior is mechanical, habitual, or routine. However, this is not the only possible way for a computer to work. In principle, the system can also solve a problem through the cooperation of many programs, and let the selection of the programs and the form of the cooperation be determined by multiple factors that are constantly changing. When the system works in this mode, the solving process of a problem does not follow any single program, and the same problem may get different solutions at different times, even though the system uses nothing but programs to solve it, with no magical or purely random force involved.
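To make this picture concrete, here is a minimal sketch in Python (my own illustration, not a description of any actual AI system): every individual step is an ordinary, fully programmed routine, yet which routines run, and in what order, depends on priorities that change with the system's experience, so the same problem can receive different solutions at different times.

    import random

    # Problem solving by the cooperation of many programs: every step is an
    # ordinary, predetermined routine, but which routine runs next depends
    # on priorities that drift with the system's experience.

    class Routine:
        def __init__(self, name, apply_fn):
            self.name = name
            self.apply = apply_fn
            self.priority = 1.0   # adjusted over time, not fixed by the designer

    def solve(problem, routines, steps=4, rng=random.Random(0)):
        # The default rng is shared across calls on purpose: the system's
        # "context" carries over from one problem to the next.
        state, trace = problem, []
        for _ in range(steps):
            # Pick a routine in proportion to its current priority.
            total = sum(r.priority for r in routines)
            pick = rng.uniform(0, total)
            for r in routines:
                pick -= r.priority
                if pick <= 0:
                    break
            state = r.apply(state)
            trace.append(r.name)
            r.priority *= 1.1     # recently used routines gain priority
        return state, trace

    routines = [Routine("double", lambda x: x * 2),
                Routine("increment", lambda x: x + 1),
                Routine("negate", lambda x: -x)]

    # The same problem, posed twice, follows different routine sequences,
    # although every individual step is a predetermined program.
    print(solve(7, routines))
    print(solve(7, routines))

Nothing in the sketch is outside the reach of ordinary programming; the variability comes entirely from state that accumulates between problems.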
It can be argued that this picture is not that different from how the human mind works. When facing novel problems, we usually have no predetermined program to follow. Instead, we try to solve them in a case-by-case manner, using all the available knowledge and resources for each case. On the other hand, in each step of the process, what the mind does is neither arbitrary nor magical, but highly regular and habitual. A more detailed discussion can be found in my paper on case-by-case problem solving.
Some other arguments target certain definitions of AI. For example, since a computer will have neither a human body nor human experience, it is unlikely to perfectly simulate human behaviors. Therefore, a computer may never be able to pass the Turing Test. Personally, I agree with this argument, but since I do not subscribe to a behavioral definition of AI, the argument has little to do with whether AI can be achieved, by my definition. Similarly, I do not believe a computer can provide a perfect solution to every problem. However, since this is not what AI means to me, I do not think AI is impossible for this reason.
In summary, though human-comparable thinking machines have not been built yet, none of the proposed arguments for their impossibility has been established.
The path toward AI
If it is possible to build AI, what is the most plausible way to do so? In terms of overall methodology, there are three major approaches:
- Hybrid: To start with the existing AI techniques, and then to connect them into a complete system. Example: (AA)AI: More than the Sum of Its Parts by Ronald J. Brachman
- Integrated: To start with an architecture, and then to insert modules built with various techniques. Example: Cognitive Synergy: A Universal Principle for General Intelligence? by Ben Goertzel
- Unified: To start with a core system built with a single technique, and then to add optional tools built with auxiliary techniques. Example: Toward a Unified Artificial Intelligence by Pei Wang
The selection of methodology depends on the research goal. Since I see intelligence as a general principle, it is natural for me to take the unified approach, by developing a technique in which the principle of intelligence is most easily formalized and implemented. Compared to the other two, the unified approach is simpler and more coherent.

Formalization turns an informal theory into a formal model. When describing a system, there are three commonly used frameworks of formalization:
- Dynamical system: The system's state is represented as a point in a multi-dimensional space, and its activity is represented as a trajectory in the space, following differential equations.
- Computational system: The system's state is represented as data, and its activity is represented as the execution of programs that process the data, following algorithms.
- Inferential system: The system's state is represented as a group of sentences, and its activity is represented as the derivation of new sentences, following inference rules.
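As a toy illustration of the third framework (my own, far simpler than any real reasoning system), in the Python sketch below the state is a set of sentences and the activity is the derivation of new sentences by a single inference rule:

    # Sentences are inheritance statements "subject -> predicate".
    beliefs = {("robin", "bird"), ("bird", "animal")}

    def deduce(beliefs):
        # One inference rule: from S -> M and M -> P, derive S -> P.
        derived = set()
        for (s, m1) in beliefs:
            for (m2, p) in beliefs:
                if m1 == m2 and s != p:
                    derived.add((s, p))
        return derived - beliefs

    print(deduce(beliefs))   # {('robin', 'animal')}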
Though in principle these three frameworks have equivalent expressive and processing power, for a concrete problem they may differ greatly in ease and naturalness. For (general-purpose) intelligent systems, I prefer the framework of an inferential system (also known as a reasoning system), mainly because of the following advantages:
- Domain independence: The design of a reasoning system mainly consists of the specifications of its formal language, semantics, inference rules, memory structure, and control mechanism. All of these components should be independent of the application domain. The domain specificity of the system comes from the content of the memory. In this way, the nature versus nurture distinction is clearly made.
- Step justifiability: Each inference step of the system must follow a predetermined inference rule, which is justified according to the semantics, so as to realize the principle of intelligence. Therefore, in every step, the system is indeed making its best choice under the restriction of available knowledge and resources.
- Process flexibility: Even though the individual steps are carried out by given programs, how these steps are linked together is not predetermined by the designer for each case. Instead, it is determined at run time by the system itself, according to several factors that depend on the system's past experience and current context.
The major difficulty in building an intelligent reasoning system is that the study of reasoning systems is dominated by mathematical logic, where the systems are designed to be axiomatic, meaning that such a system starts with a set of axioms that are supposed to be true, then uses truth-preserving rules to derive true conclusions, without considering the resource expense. Under the assumption that intelligence means to adapt with insufficient knowledge and resources, an intelligent system has to be non-axiomatic, in the sense that its beliefs are not axioms and theorems, but summaries of the system's experience, so every belief is fallible and revisable. Also, the inference rules are no longer truth-preserving in the traditional sense, but in the sense that their conclusions properly summarize the evidence provided by the premises. Furthermore, due to insufficient resources, the system usually cannot consider all relevant beliefs when solving a problem, but has to use the available resources to consider only the most relevant and important beliefs. Consequently, my system NARS (Non-Axiomatic Reasoning System) is very different from traditional reasoning systems, but is more similar to the human mind.
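To make the non-axiomatic idea concrete, here is a minimal Python sketch based on the truth-value definitions published for NARS (greatly simplified; the real system defines many inference rules over these values, and the raven example and variable names are mine):

    K = 1.0   # evidential horizon: confidence approaches, but never reaches, 1

    def truth(w_plus, w):
        # A belief carries an evidence count rather than a binary truth:
        # w_plus positive pieces of evidence out of w total.
        frequency = w_plus / w       # proportion of positive evidence
        confidence = w / (w + K)     # stability of that proportion
        return frequency, confidence

    def revise(a, b):
        # Revision: pool the evidence of two independent sources of one belief.
        (wp1, w1), (wp2, w2) = a, b
        return (wp1 + wp2, w1 + w2)

    e1 = (4.0, 5.0)                 # "ravens are black": 4 black out of 5 seen
    print(truth(*e1))               # (0.8, 0.833...): moderately confident
    e2 = (9.0, 10.0)                # more evidence from an independent source
    print(truth(*revise(e1, e2)))   # (0.866..., 0.9375): higher confidence

Because confidence never reaches 1, no amount of evidence turns a belief into an axiom, which is exactly the sense in which such a system is non-axiomatic.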
NARS is an on-going project. For up-to-date technical descriptions of the system, as well as demonstrations with working examples, visit the project website.
The ethics of AI
Even if in principle we know how to build AI, should we really try it?

The ethics of AI is a topic that has raised many debates, both among researchers in the field and among the general public. Since many people see intelligence as what makes humans the dominant species in the world, they worry that AI will take that position, and that the success of AI will actually lead to a disaster for humans.
This concern is understandable. Though the advances in science and technology have solved many problems for us, they also create various new problems, and sometimes it is hard to say whether a specific theory or technique is beneficial or harmful. Given the potential impact of AI on human society, we, the AI researchers, have the responsibility of carefully anticipating the social consequences of our research results, and of doing our best to bring out the benefits of the technique while preventing the harm from it.
According to my theory of intelligence, the behaviors of an intelligent system are determined both by its nature (design) and its nurture (experience). The system's intelligence mainly comes from its design, and is morally neutral. In other words, the system's degree of intelligence has nothing to do with whether the system is considered beneficial or harmful, either by a single human or by the whole human species. This is because the intelligence mechanism is independent of the content of the system's goals and beliefs, which are determined mainly by the system's experience.
Therefore, to control the morality of an intelligent system means to control its experience, that is, to educate the system. We cannot design a human-friendly AI, but have to teach an AI to be human-friendly, by using carefully chosen materials to shape its goals and beliefs. Initially, we can load its memory with certain content, in the spirit of Asimov's Three Laws of Robotics, as well as many more detailed requirements and regulations, though we cannot expect them to resolve all the moral issues.
Here the difficulty comes from the fact that for a sufficiently complicated intelligent system, it is practically impossible to fully control its experience. Or, to put it another way, if a system's experience can be fully controlled, its behavior will be fully predictable; however, such a system cannot be fully intelligent.
Due to insufficient knowledge and resources, the derived goals of an intelligent system are not always consistent with their origins. Similarly, the system cannot fully anticipate all consequences of its actions, so even if its goal is benign, the actual consequence may still turn out to be harmful, to the surprise of the system itself.
As a result, the ethical and moral status of AI is basically the same as that of most other science and technology: neither beneficial in a foolproof manner, nor inevitably harmful. The situation is similar to what every parent has learned: a friendly child is usually the product of education, not bioengineering, though this education is not a one-time effort, and one should always be prepared for unexpected events. AI researchers have to keep the ethical issues in mind at all times, and make the best selections at each design stage, without expecting to settle the issue once and for all, or to cut off the research altogether just because it may go wrong; that is not how an intelligent species deals with uncertain situations.
The answers
Here is a summary of my answers to the four basic questions about AI:
- What is AI?
  AI is computer systems that can adapt to the environment and work with insufficient knowledge and resources.
- Can AI be built?
  Yes, since the above definition does not require anything impossible. The previous failures are mainly due to misconceptions.
- How to build AI?
  The most likely way is to design a reasoning system in a non-axiomatic manner, in which validity means adaptation under knowledge-resource restriction.
- Should AI be built?
  Yes, since AI has great potential in many applications, though we need to be careful all the time to avoid its misuse or abuse.
To show the similarity and difference between my opinion and those of other AI researchers, in the following I select some representative answers to similar questions, also in non-technical language:
- What is Artificial Intelligence? by John McCarthy
- Introduction of Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig
- Universal Intelligence: A Definition of Machine Intelligence by Shane Legg and Marcus Hutter
- Essentials of General Intelligence by Peter Voss
