About 2 million years ago, a certain kind of mammal used a branch as a tool for the first time. We now call those mammals human beings. That mammal discovered that a branch can serve as a kind of arm, one far more effective for lifting weight. This discovery was the first great historical event on the road to mastering our environment. Humans invented tools, starting with artificial arms (sticks) and artificial fists (stones). With tools, humans could act much more efficiently, and they began to live a better and easier life. Still, a natural muscle moved these tools and a natural brain guided them.

The second historical event, which occurred about 200 years ago, was the industrial revolution. Movement by an artificial force (steam) replaced biological force; the machine was born. Tools, now moved by a manufactured force, were stronger and faster than those moved by muscle. Thus, the ability of humans to act, in quality and quantity, became immensely greater than before. The standard of living advanced significantly. Still, a natural brain guided this combination of artificial arm and artificial force.

Today, humans are taking the third and final step: we are giving our machines artificial brains. These machines will act for us with no guidance beyond the objectives we give them. With only those objectives, they will do whatever they need to do to reach them. Each human being will then be able to do as much as hundreds of people can today. This massive work force should cause a decrease in the number of working hours and an immense increase in the quality of life.

Artificial Intelligence, or AI, is a branch of computer science that studies the computational requirements for tasks such as perception, reasoning, and learning, and develops systems to perform those tasks.

It deals with helping machines find solutions to complex problems in a more human-like fashion. Artificial intelligence (AI) can also be described as the use of computers to model the behavioral aspects of human reasoning and learning. This generally involves borrowing characteristics from human intelligence and applying them as algorithms in a computer-friendly way.

AI is generally associated with Computer Science, but it has many important links with other fields such as Mathematics, Psychology, Cognitive Science, Biology and Philosophy, among many others. Our ability to combine knowledge from all these fields will ultimately benefit our progress in the quest to create an intelligent artificial being.

AI is a diverse field whose researchers address a wide range of problems, use a variety of methods, and pursue a spectrum of scientific goals. For example, some researchers study the requirements for expert performance at specialized tasks, while others model commonsense processes; some researchers explain behaviors in terms of low-level processes, using models inspired by the computation of the brain, while others explain them in terms of higher-level psychological constructs such as plans and goals. Some researchers aim to advance understanding of human cognition, some to understand the requirements for intelligence in general (whether in humans or machines), and some to develop artifacts such as intelligent devices, autonomous agents, and systems that cooperate with people to amplify human abilities.

Birth of Artificial Intelligence:

Timeline of Artificial Intelligence

Background

• McCulloch and Pitts's (1943) theory of neurons as computing circuits, followed up by Hebb's work on learning

• Work in the early 1950s on game playing by Turing and Shannon:

In 1948 British mathematician Alan Turing developed a chess algorithm for use with calculating machines; it lost to an amateur player in the one game that it played. In 1950 American mathematician Claude Shannon articulated two chess-playing approaches: brute force, in which all possible moves and their consequences are calculated as far into the future as possible, and selective search, in which only the most promising moves and their immediate consequences are evaluated (the brute-force idea is sketched in code after this list).
• Marvin Minsky’s work on neural networks.
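
A minimal Python sketch of the brute-force idea Shannon described, purely for illustration: explore every legal move to a fixed depth and back the scores up the tree (plain minimax). The legal_moves, apply_move and evaluate hooks are hypothetical game-interface functions, not part of the original account.

def minimax(position, depth, maximizing, legal_moves, apply_move, evaluate):
    # Brute force: consider every legal move, as far ahead as depth allows.
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)          # static score of the position
    scores = [minimax(apply_move(position, m), depth - 1, not maximizing,
                      legal_moves, apply_move, evaluate)
              for m in moves]
    return max(scores) if maximizing else min(scores)

# Toy usage: a "game" where each move adds +1 or -1 and the score is the value.
print(minimax(0, 3, True,
              legal_moves=lambda p: [+1, -1],
              apply_move=lambda p, m: p + m,
              evaluate=lambda p: p))       # -> 1 with optimal alternating play

Shannon's selective alternative would differ only in the first step: filter the move list down to the few most promising candidates before recursing.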

Nomenclature

In 1956 John McCarthy organized a conference at Dartmouth on artificial intelligence and invited the leading figures working in the field. Many giants of the field, such as Marvin Minsky, Allen Newell and Herbert Simon, attended this conference. It was there that McCarthy proposed the term 'ARTIFICIAL INTELLIGENCE'.

Artificial Intelligence according to Wizards of the field

"As the study of how to build or program computers to enable them to do the
sort of things that minds can do." -- Boden. {Explaining Artificial
Intelligence}

"The science of making machines does things that would required intelligence if
done by men." --Marvin Minsky in 1968

"The overriding goal of AI is to create an artificial system which will equal or


surpass a human being intelligence." -- Weizenbaum in 1976

"The use of computer programs and programming techniques to cast light on


the principle of intelligence in general and human thoughts in particular."
-- Winstone and Boden in 1977

“[The automation of] activities that we associate with human thinking, activities such as decision-making, problem solving, learning …” -- Bellman, 1978

"Intelligent beings are semantic engines -- in other words, automatic formal


systems with interpretations under which they consistently make sense."
-- Haugeland in 1981

“AI supporters argue that artificial intelligence is genuine intelligence, and that human intelligence is merely the capacity of a naturally evolved organism to perform various sorts of operations, operations no different from those which might be performed by an artificial system.

On the other hand, intelligence is usually taken to be a feature of mentality; to be intelligent is to have a mind. To have a mind is to possess many more properties than merely those associated with the capacity for intelligent performances. Just what is involved in having a mind may differ from species to species. In the case of humans, having a mind seems to involve being able to have sensuous awareness of objects and of one's own experience and physical states; being able to experience pain and pleasure, love, hate, fear, anger, ecstasy, serenity; being able to be creative, inspired, nauseated, ashamed, bored; being able to tell jokes and find them funny, to play, to be aroused, to be satisfied, to suffer.”
-- Steve Torrance

AI is the capacity of a digital computer or computer-controlled robot to perform tasks commonly associated with the higher intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience. The term is also frequently applied to the branch of computer science concerned with the development of systems endowed with such capabilities.
-- Herbert A. Simon, Professor of Computer Science and Psychology, Carnegie Mellon University

This is a much harder question than you might think. One answer I like a lot,
although it isn't very precise, is "what computers cannot do yet." The problem
with that is it admits a lot of things that aren't AI, so I guess the better answer
is to say that there are three main schools of AI: people trying to model what
humans do (sort of psychology based), people trying to make what people do
easier and better (tools for humans), and people who are trying to build new
tools with "far out" capabilities. I tend to be a little bit in all three of these,
with a preference for the third.
--James Hendler, Defense Advanced Research
Projects Agency; University of Maryland

Some Important Laws and Definitions Governing Artificial Intelligence

Turing test
The Turing test is a proposal for a test of a machine's capability to perform human-like conversation. Described by Alan Turing in the 1950 paper "Computing Machinery and Intelligence", it proceeds as follows: a human judge engages in a natural language conversation with two other parties, one a human and the other a machine; if the judge cannot reliably tell which is which, then the machine is said to pass the test. It is assumed that both the human and the machine try to appear human. In order to keep the test setting simple and universal (to explicitly test the linguistic capability of the machine instead of its ability to render words into audio), the conversation is usually limited to a text-only channel such as a teletype machine, as Turing suggested, or, more recently, IRC or instant messaging.
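
A toy illustration of the setup in Python (both reply functions and the questions are invented stand-ins): the judge sees only text, and if the machine's replies are indistinguishable from the human's, the judge's guess is no better than chance.

def human(prompt: str) -> str:
    return "I'd rather talk about music, honestly."

def machine(prompt: str) -> str:
    # The machine tries to appear human; here, trivially, it gives
    # the same kind of reply a person might.
    return "I'd rather talk about music, honestly."

def interrogate(party, questions):
    # Text-only channel: the judge sees replies, never the party itself.
    return [party(q) for q in questions]

questions = ["What do you do on weekends?", "Tell me a joke."]
print(interrogate(human, questions) == interrogate(machine, questions))
# True: with indistinguishable transcripts, the machine passes the test.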

Moore's law
Moore's law is the empirical observation that, at our rate of technological development, the complexity of an integrated circuit, with respect to minimum component cost, doubles approximately every 18 months. It is attributed to Gordon E. Moore, a co-founder of Intel Corporation.
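
The arithmetic is simple exponential growth; a quick sketch (the starting count and time span below are made-up numbers, purely for illustration):

def projected_complexity(initial_count, months, doubling_period=18):
    # Component count doubles once every `doubling_period` months.
    return initial_count * 2 ** (months / doubling_period)

# A hypothetical 10,000-component chip, six years (72 months) later:
print(projected_complexity(10_000, 72))  # 10,000 * 2**4 = 160,000.0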

Intelligent Agent

In computer science, an Intelligent Agent (IA) refers to a software agent that exhibits some form of artificial intelligence; typical examples are software agents used for operator assistance or data mining.
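
Such agents are commonly organized as a perceive-decide-act loop. A minimal sketch, in which the room environment, the thermostat rule and all the numbers are invented for illustration:

import random

class Room:
    # Toy environment: a room whose temperature drifts slightly.
    def __init__(self, temperature=18.0):
        self.temperature = temperature

    def read_sensor(self):
        return self.temperature

    def apply(self, action):
        self.temperature += 0.5 if action == "heat_on" else -0.2
        self.temperature += random.uniform(-0.1, 0.1)  # ambient drift

class ThermostatAgent:
    # Trivial reflex agent: perceive the temperature, decide, act.
    def __init__(self, target=21.0):
        self.target = target

    def decide(self, temperature):
        return "heat_on" if temperature < self.target else "heat_off"

agent, room = ThermostatAgent(), Room()
for _ in range(30):                        # the perceive-decide-act loop
    room.apply(agent.decide(room.read_sensor()))
print(round(room.temperature, 1))          # hovers near the 21.0 target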

Neural Net

Neural Nets emulate elements of the human cognitive process, especially the
ability to recognize and learn patterns. The architecture consists of a
large number of nodes that serve as calculators to process inputs and
pass the results to other nodes in the network. These systems have the
advantage of not requiring prior assumptions about possible
relationships. One application of neural nets might be forecasting
employee turnover by category based on such factors as tenure with the
firm, managerial level, and gender.
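
A minimal sketch of the node-and-connection idea in Python (the layer sizes, random weights and input values here are invented; a trained net would have learned its weights from data rather than drawing them at random):

import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each node sums its weighted inputs and passes the result onward.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

random.seed(0)
# 3 inputs (e.g. tenure, managerial level, gender) -> 2 hidden nodes -> 1 output
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_output = [[random.uniform(-1, 1) for _ in range(2)]]

hidden = layer([0.7, 0.3, 1.0], w_hidden, [0.0, 0.0])
score = layer(hidden, w_output, [0.0])
print(score)  # an untrained turnover score; learning would adjust the weights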

Partition in AI
At the very beginning of the concept, the AI science community split into two camps, which came to be known as NEATS and SCRUFFIES. The major cause of disagreement between the two sects is the difference in their approaches to achieving AI. Both camps are fierce opponents and simply dismiss the viability of the other. No truce has been achieved so far, and this has greatly damaged research work in the field.

The Neat Position vs. the Scruffy Position (Murphy and Lentell, 1993):

• Neat: Theoretical knowledge is superordinate to practice.
  Scruffy: Theoretical knowledge is subordinate to practice.

• Neat: The conditions of practice are reliability, predictability and stability.
  Scruffy: The conditions of practice are uncertainty, instability, complexity and value conflicts.

• Neat: Theory and research are directly and linearly linked to professional practice; the former drives the latter, and thus knowledge is superordinate to the professional and designed to prescribe practice.
  Scruffy: Professional practice is characterized by reflection; action and reflection episodes, theory and research comprise only one source of knowledge, which is to inform but not to prescribe practice.

• Neat: Professionals bring to their practice a set of standardized skills linked to a series of scientifically verified standard practice treatments. The professional then searches the context in which he or she works, carefully diagnosing and characterizing contingencies and situations according to predetermined and standardized protocols.
  Scruffy: Professionals seek to maximize certain (often competing) values within a highly dynamic context, with the costs and benefits of pattern emphases changing from moment to moment.

• Neat: Scientific truth and scientific theory can be applied directly to problems of professional practice; the aim is to establish the one best analysis of a problem, the one best way to practice, given existing scientific knowledge.
  Scruffy: The task is not to pigeonhole discrete outcomes and apply standard practice treatments, but to 'ride the wave' of the pattern as it unfolds. Theoretical knowledge is not used to prescribe but to inform intuition and to enhance professional judgments.

-- Murphy and Lentell (1993)

This conflict tangles together two separate issues. One is the relationship between human reasoning and AI; "neats" tend to try to build systems that "reason" in some way identifiably similar to the way humans report themselves as doing, while "scruffies" profess not to care whether an algorithm resembles human reasoning in the least, as long as it works.

More importantly, neats tend to believe that logic is king, while scruffies
favor looser, more ad-hoc methods driven by empirical knowledge. To a neat,
scruffy methods appear promiscuous, successful only by accident and not
productive of insights about how intelligence actually works; to a scruffy, neat
methods appear to be hung up on formalism and irrelevant to the hard-to-
capture "common sense" of living intelligences.

Throughout the 1960s and 1970s, scruffy approaches were pushed to the
background, but interest was regained in the 1980s when the limitations of the
"neat" approaches of the time became clearer. However, it has become clear
that contemporary methods using both broad approaches have severe
limitations.
