In this essay, I am going to discuss four basic questions about Artificial Intelligence (AI):
- What is AI?
- Can AI be built?
- How to build AI?
- Should AI be built?
Every AI researcher probably has answers to these questions, and so do many people interested in AI. These questions are basic, since the answers to many other questions in AI often depend on the answers to them.
In the following, I will briefly discuss each of the questions in a casual and informal manner, without assuming much prior knowledge on the reader's part. Many hyperlinks are embedded to provide more detailed information, usually from on-line materials. This writing makes no attempt to provide a comprehensive survey of the related topics; it only presents my analysis and answers, which are usually compared with a few other representative answers.
These four questions will be discussed in the given order, because the answer to the What question strongly influences the answers to the other questions, so it must be addressed first. After that, if the answer to the Can question is negative, it makes little sense to talk about the How and Should questions. Finally, if nobody knows how to achieve this goal, we do not need to worry about whether it is a good idea to actually create AI.
The aim of AI
Though it is not unusual for the researchers in a research field to explore in different directions, the situation of AI is still special in the extent of the diversity among the research goals in the field.
The electronic digital computer was invented in the 1940s mainly to carry out numerical computations, which used to be a unique capability of the human mind. Very soon, some researchers realized that with proper coding, many other intellectual capabilities could be implemented in computers, too. It was then very natural for them to ask whether a computer can have all the capabilities of the human mind, which are usually called intelligence, thinking, or cognition. Among the pioneering thinkers were Alan Turing, Norbert Wiener, and John von Neumann.
The field of AI was founded by a group of researchers in the 1950s, including John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon. To them, the artificial intelligence problem is taken to be that of making a machine behave in ways that would be called intelligent if a human were so behaving.
In a general and rough sense, everyone agrees that AI means to build computer systems that are similar to the human mind. However, since it is obviously impossible for a computer to be identical to a human in all aspects, any further specification of AI must focus on a certain aspect of the human mind as essential, and treat the other aspects as irrelevant, incidental, or secondary.
On where AI should be similar to human intelligence, the different opinions can be divided into five groups:
Structure: Since human intelligence is produced by the human brain, AI should aim at brain modeling. Example: HTM
Behavior: Since human intelligence is displayed in human behavior, AI should aim at behavior simulation. Example: Turing Test
Capability: Since intelligent people are the ones that can solve hard proble
Though this argument sounds straightforward, its two premises are actually on different levels of description when talking about a system. Properties like originality, creativity, and autonomy are all about the system's high-level behaviors. When saying that they are not programmed in an intelligent system, what it means is that for any concrete stimulus or problem, the system's response or solution is not only determined by the system's initial design, but also depends on the system's history and the current context. On the other hand, when saying that a computer's actions are programmed, it is about the low-level activities of the system. The fact that the system's low-level activities are predetermined by various programs does not necessarily mean that its behaviors when solving a problem always follow a fixed procedure.
Intuitively speaking, a computer's activities are controlled by a number of programs that are designed by human programmers. When the system faces a problem, there are two possibilities. In a conventional computer system, there is a single program responsible for this type of problem, so it will be invoked to solve the problem. When the system works in this mode, there is indeed no creativity or autonomy. It is fair to say that there is no intelligence involved, and that the system's behavior is mechanical, habitual, or routine. However, this is not the only possible way for a computer to work. In principle, the system can also solve a problem through the cooperation of many programs, and let the selection of the programs and the form of their cooperation be determined by multiple factors that are constantly changing. When the system works in this mode, the solving process of a problem does not follow any single program, and the same problem may get different solutions at different times, even though the system uses nothing but programs to solve it, with no magical or purely random force involved.
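The contrast between the two working modes can be sketched in a few lines of code. This is a minimal illustration of my own, not any actual AI system: the class and function names are invented, and the "programs" are trivial string operations. The point is only that program selection driven by an ever-changing history can yield different solutions to the same problem, with no randomness involved.

```python
def solve_routine(problem):
    """Conventional mode: a single fixed program handles this problem type."""
    return problem.upper()


class AdaptiveSolver:
    """Adaptive mode: the same problem may get different solutions at
    different times, even though nothing but ordinary, deterministic
    programs are used."""

    def __init__(self):
        self.history = []  # past experience, a constantly changing factor
        self.programs = {
            "capitalize": str.upper,
            "reverse": lambda s: s[::-1],
        }

    def solve(self, problem):
        # Program selection depends on the system's current state (its
        # history), not only on its initial design.
        names = sorted(self.programs)
        chosen = names[len(self.history) % len(names)]
        self.history.append((problem, chosen))
        return self.programs[chosen](problem)


solver = AdaptiveSolver()
first = solver.solve("abc")   # history is empty, so "capitalize" is chosen
second = solver.solve("abc")  # history has grown, so "reverse" is chosen
```

Here `solve_routine` always returns the same answer, while `AdaptiveSolver` gives "ABC" on the first call and "cba" on the second, purely because its history has changed in between.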
It can be argued that this picture is not that different from how the human mind works. When facing novel problems, we usually have no predetermined program to follow. Instead, we try to solve them in a case-by-case manner, using all the available knowledge and resources for each case. On the other hand, in each step of the process, what the mind does is neither arbitrary nor magical, but highly regular and habitual. A more detailed discussion can be found in my paper on case-by-case problem solving.
Some other arguments target certain definitions of AI. For example, since a computer will have neither a human body nor human experience, it is unlikely to perfectly simulate human behaviors, and therefore may never be able to pass the Turing Test. Personally, I agree with this argument, but since I do not subscribe to a behavioral definition of AI, the argument has little to do with whether AI, by my definition, can be achieved. Similarly, I do not believe a computer can provide a perfect solution to every problem; however, since this is not what AI means to me, I do not think AI is impossible for this reason.
In summary, though human-comparable thinking machines have not been built yet, none of the proposed arguments for their impossibility has been established.
The path toward AI
If it is possible to build AI, what is the most plausible way to do so? In terms of overall methodology, there are three major approaches:
Hybrid: To start with the existing AI techniques, and then to connect them into a complete system. Example: (AA)AI: More than the Sum of Its Parts by Ronald J. Brachman
Integrated: To start with an architecture, and then to insert modules built with various techniques. Example: Cognitive Synergy: A Universal Principle for General Intelligence? by Ben Goertzel
Unified: To start with a core system built with a single technique, and then to add optional tools built with auxiliary techniques. Example: Toward a Unified Artificial Intelligence by Pei Wang
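The shape of the unified approach can be made concrete with a small sketch. This is a hypothetical illustration of my own, not the design of any actual system: the names `CoreSystem` and `register_tool` are invented. The idea it shows is that one core mechanism handles every task by default, while auxiliary tools are optional plug-ins around it, rather than the core being assembled out of heterogeneous modules.

```python
class CoreSystem:
    """A single core technique, with optional auxiliary tools around it."""

    def __init__(self):
        self.tools = {}  # auxiliary tools are optional plug-ins

    def register_tool(self, name, fn):
        self.tools[name] = fn

    def solve(self, task, arg):
        # A registered tool handles the task if one fits; otherwise the
        # single core mechanism handles it. Removing every tool still
        # leaves a complete (if weaker) system.
        if task in self.tools:
            return self.tools[task](arg)
        return ("handled-by-core", arg)


core = CoreSystem()
core.register_tool("sqrt", lambda x: x ** 0.5)
```

The design choice this captures is that the tools are dispensable: the system degrades gracefully without them, whereas in the hybrid approach removing a part removes a capability outright.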
The selection of methodology depends on the research goal. Since I see intelligence as a general principle, it is natural for me to take the unified approach, by developing a technique in which the principle of intelligence is most easily formalized and implemented. Compared to the other two, the unified approach is relatively simpler and more coherent.
The ethics of AI is a topic that has raised many debates, both among researchers in the field and among the general public. Since many people see intelligence as what makes humans the dominant species in the world, they worry that AI will take that position, and that the success of AI will actually lead to a disaster for humans.
This concern is understandable. Though the advances in science and technology have solved many problems for us, they also create various new problems, and sometimes it is hard to say whether a specific theory or technique is beneficial or harmful. Given the potential impact of AI on human society, we, the AI researchers, have the responsibility of carefully anticipating the social consequences of our research results, and of doing our best to bring out the benefits of the technique while preventing the harm from it.
According to my theory of intelligence, the behaviors of an intelligent system are determined both by its nature (design) and its nurture (experience). The system's intelligence mainly comes from its design, and is morally neutral. In other words, the system's degree of intelligence has nothing to do with whether the system is considered beneficial or harmful, either by a single human or by the whole human species. This is because the intelligence mechanism is independent of the content of the system's goals and beliefs, which are determined mainly by the system's experience.
Therefore, to control the morality of an intelligent system means to control its experience, that is, to educate the system. We cannot design a human-friendly AI, but have to teach an AI to be human-friendly, by using carefully chosen materials to shape its goals and beliefs. Initially, we can load its memory with certain content, in the spirit of Asimov's Three Laws of Robotics, as well as many more detailed requirements and regulations, though we cannot expect them to resolve all the moral issues.
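The distinction between designing and educating can be sketched as follows. This is a toy illustration under my own assumptions, not any real agent architecture: the class `Agent`, its belief representation, and the example statements are all invented. What it shows is that the revision mechanism (nature) stays fixed and morally neutral, while the beliefs themselves (nurture) come from initially loaded content plus later experience, and remain open to revision.

```python
class Agent:
    """Nature: a fixed, neutral belief-revision mechanism.
    Nurture: beliefs loaded at the start, then revised by experience."""

    def __init__(self, initial_content):
        # Initially loaded memory, in the role of the "carefully chosen
        # materials" used to start the system's education.
        self.beliefs = dict(initial_content)

    def experience(self, statement, evidence):
        # Each new experience revises, but never permanently freezes,
        # the strength of a belief.
        self.beliefs[statement] = self.beliefs.get(statement, 0.0) + evidence


agent = Agent({"avoid harming humans": 1.0})  # initial "education"
agent.experience("avoid harming humans", 0.5)  # later experience reinforces it
agent.experience("keep promises", 0.8)         # new beliefs are acquired, too
```

Note that nothing in the mechanism itself distinguishes benevolent from harmful content; which beliefs the agent ends up with depends entirely on what it is taught and what it experiences.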
Here the difficulty comes from the fact that for a sufficiently complicated intelligent system, it is practically impossible to fully control its experience. Or, to put it another way, if a system's experience can be fully controlled, its behavior will be fully predictable; however, such a system cannot be fully intelligent.
Due to insufficient knowledge and resources, the derived goals of an intelligent system are not always consistent with their origins. Similarly, the system cannot fully anticipate all consequences of its actions, so even if its goal is benign, the actual consequence may still turn out to be harmful, to the surprise of the system itself.
As a result, the ethical and moral status of AI is basically the same as that of most other science and technology: neither beneficial in a foolproof manner, nor inevitably harmful. The situation is similar to what every parent has learned: a friendly child is usually the product of education, not of bioengineering, though this education is not a one-time effort, and one should always be prepared for unexpected events. AI researchers have to keep the ethical issues in mind at all times, and make the best selections at each design stage, without expecting to settle the issues once and for all, or to cut off the research altogether just because it may go wrong; that is not how an intelligent species deals with uncertain situations.
The answers
Here is a summary of my answers to the four basic questions about AI:
What is AI?
AI is computer systems that can adapt to the environment and work with insufficient knowledge and resources.
Can AI be built?
Yes, since the above definition does not require anything impossible. The pr