
Artificial Intelligence

Seminar paper for the English Language course, Information Systems

Course instructors: Doc. dr. Edina pago-Dumurija, Asst. Iris Memid

Student: Jasmina Osivid 2297

Contents:

1. Introduction
2. History
3. Schools of thought
4. Main areas of Artificial Intelligence
   4.1. Expert systems
   4.2. Robotics
        Components of robotic systems
        First use of the word Robotics
   4.3. Natural language processing (NLP)
   4.4. Artificial intelligence (video games)
   4.5. Neural networks
5. Conclusion
6. Literature

1. Introduction

In the next couple of pages we will give a short overview and description of Artificial Intelligence. The term Artificial Intelligence (AI) was coined by Prof. John McCarthy in 1956 at an interdisciplinary conference held at Dartmouth College, which we now know as the Dartmouth Conference. The Dartmouth Conference was driven by the idea that if the right group of scientists got together and cooperated, computing machines could be developed that would simulate intelligence. However, the researchers could not reach agreement on how to proceed, and they split into several groups going in different directions. Artificial Intelligence is now in routine use in economics, medicine, engineering and the military, as well as being built into many common home computer software applications, traditional strategy games such as computer chess, and other video games. Artificial Intelligence (AI) is defined as the science and engineering of making intelligent machines, with applications ranging across everyday facilities, leisure and communication. The goals of this research include reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects.

The main areas of AI are:
- Robotics
- Natural language processing
- Expert systems
- Game playing
- Neural networks

2. History
The concept of intelligent machines, and the intellectual roots of Artificial Intelligence, may be found in Greek mythology.

Talos was a giant man of bronze, created by Hephaestus, who protected Europa in Crete from pirates and invaders by circling the island's shores three times daily while guarding it. Pygmalion's Galatea was a statue carved of ivory by Pygmalion of Cyprus, which came to life. In the tales of the Golem in Jewish folklore, the Golem is an artificial creature created by magic to serve its creator.

After modern computers became available following World War II, it became possible to create programs that perform difficult intellectual tasks.

Important timelines for Artificial Intelligence:
1915 - Leonardo Torres builds a chess automaton
1941 - Konrad Zuse builds the first program-controlled computer
1945 - Evolution of game theory in AI
1950 - Alan Turing proposes the Turing test
1956 - The term AI is introduced worldwide at the Dartmouth Conference
1957 - GPS demonstrated by Newell, Shaw and Simon
1958 - John McCarthy invents the LISP programming language
1965 - Edward Feigenbaum initiates Dendral
1993 - Polly (a behavior-based robot) by Ian Horswill
2007 - Blue Brain, a project to simulate the brain at the molecular level


3. Schools of thought:
Artificial intelligence divides roughly into two schools of thought: Conventional AI and Computational Intelligence (CI), sometimes referred to as Synthetic Intelligence.

Conventional AI mostly involves methods now classified as machine learning, characterized by formalism and statistical analysis. This is known as symbolic AI, logical AI or neat AI. Methods include:
- Expert systems: can process large amounts of known information and provide conclusions based on it.
- Case-based reasoning: the process of solving new problems based on the solutions of similar past problems.
- Bayesian networks: represent a set of variables together with a joint probability distribution with explicit independence assumptions.
- Behavior-based AI: a modular method of building AI systems by hand.

Computational Intelligence involves iterative development or learning. Learning is based on empirical data. It is known as non-symbolic AI, scruffy AI or soft computing. Methods include:
- Neural networks: systems with very strong pattern recognition capabilities.
- Fuzzy systems: techniques for reasoning under uncertainty, widely used in modern industrial and consumer product control systems.
- Evolutionary computation: applies biologically inspired concepts such as populations, mutation and survival of the fittest to generate increasingly better solutions to a problem.

Hybrid intelligent systems attempt to combine these two groups. It is thought that the human brain uses multiple techniques to both formulate and cross-check results. Systems integration is seen as promising, and perhaps necessary, for true AI.
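As a small illustration of one of the methods listed above, the following is a minimal sketch of a Bayesian network in Python. The network structure (Rain, Sprinkler, WetGrass) and all probabilities are invented purely for illustration; the point is how the joint distribution is factored using explicit independence assumptions.

```python
# Minimal Bayesian network sketch: Rain -> Sprinkler, (Rain, Sprinkler) -> WetGrass.
# All probabilities are illustrative, not taken from any real model.

P_RAIN = {True: 0.2, False: 0.8}                      # P(Rain)
P_SPRINKLER = {True: {True: 0.01, False: 0.99},       # P(Sprinkler | Rain)
               False: {True: 0.40, False: 0.60}}
P_WET = {(True, True): 0.99, (True, False): 0.80,     # P(WetGrass=True | Rain, Sprinkler)
         (False, True): 0.90, (False, False): 0.00}

def joint(rain: bool, sprinkler: bool, wet: bool) -> float:
    """Joint probability factored according to the network's independence assumptions."""
    p_wet_true = P_WET[(rain, sprinkler)]
    return (P_RAIN[rain]
            * P_SPRINKLER[rain][sprinkler]
            * (p_wet_true if wet else 1.0 - p_wet_true))

if __name__ == "__main__":
    # P(Rain=True, Sprinkler=False, WetGrass=True)
    print(joint(True, False, True))     # 0.2 * 0.99 * 0.80 = 0.1584
    # Marginal P(WetGrass=True), summing the joint over the other variables
    print(sum(joint(r, s, True) for r in (True, False) for s in (True, False)))
```

The factorization into small conditional tables, instead of one table over every combination of variables, is exactly what the "explicit independence assumptions" mentioned above buy us.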


4. Main areas of Artificial Intelligence

The general problem of creating intelligence has been broken down into a number of specific sub-problems. These consist of particular traits or capabilities that researchers would like an intelligent system to display.

4.1. Expert systems:


An expert system is an artificial intelligence program that has expert-level knowledge about a particular domain and knows how to use its knowledge to respond properly. Domain refers to the area within which the task is being performed. Ideally the expert system should substitute for a human expert. Edward Feigenbaum of Stanford University has defined an expert system as an intelligent computer program that uses knowledge and inference procedures to solve problems that are difficult enough to require significant human expertise for their solution. Expert systems were introduced by researchers in the Stanford Heuristic Programming Project. They are a branch of AI designed to work within a particular domain. The source of knowledge may come from a human expert and/or from books, magazines and the internet. As knowledge plays a key role in their functioning, they are also known as knowledge-based systems.

Characteristics of expert systems:
- High performance: they should perform at the level of a human expert.
- Adequate response time: they should have the ability to respond in a reasonable time.
- Reliability: they must be reliable and should not crash.
- Understandability: they should not be a black box; instead they should be able to explain the steps of their reasoning process.

Components of expert systems:

The expert system consists of two major components: knowledge base and inference engine.

The knowledge base contains the domain knowledge, which is used by the inference engine to draw conclusions. The inference engine is the generic control mechanism that applies the axiomatic knowledge to the task-specific data to arrive at some conclusion.
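To show roughly how a knowledge base and an inference engine fit together, here is a minimal forward-chaining sketch in Python. The rules and facts are invented for illustration and do not come from any particular expert system.

```python
# Minimal expert-system sketch: a knowledge base of if-then rules plus a
# forward-chaining inference engine. Rules and facts are illustrative only.

# Knowledge base: each rule is (set of premises, conclusion).
RULES = [
    ({"has_fever", "has_cough"}, "suspect_flu"),
    ({"suspect_flu", "short_of_breath"}, "recommend_doctor_visit"),
]

def infer(facts: set) -> set:
    """Inference engine: keep firing rules whose premises hold until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

if __name__ == "__main__":
    observed = {"has_fever", "has_cough", "short_of_breath"}
    print(infer(observed))
    # Derives 'suspect_flu' and then 'recommend_doctor_visit' from the observed facts.
```

A real system would also record which rules fired, so it could explain its reasoning steps, matching the understandability characteristic listed above.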

4.2. Robotics:
Robotics is the engineering science and technology of robots, covering their design, manufacture, application, and structural disposition. Robotics and AI are often seen as two entirely different fields, robotics being a mechanical engineering field and AI being computer science related. According to the Robot Institute of America (1979) a robot is: "A reprogrammable, multifunctional manipulator designed to move material, parts, tools, or specialized devices through various programmed motions for the performance of a variety of tasks". Some modern robots also have the ability to learn in a limited capacity. Learning robots recognize whether a certain action (moving their legs in a certain way, for instance) achieved a desired result (navigating an obstacle). The robot stores this information and attempts the successful action the next time it encounters the same situation. Again, modern computers can only do this in very limited situations. They can't absorb any sort of information like a human can. Some robots can learn by mimicking human actions.
Components of robotic systems:

All robotic systems share a common set of components: sensing, planning, modelling, control, and the robot mechanism itself. The real robot is some mechanical device (mechanism) that moves around in the environment and, in doing so, physically interacts with this environment. This interaction involves the exchange of physical energy, in some form or another. Both the robot mechanism and the environment can be the cause of the physical interaction through actuation, or experience the effect of the interaction, which can be measured through sensing.
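The sensing/actuation loop described above, combined with the limited "remember what worked" learning mentioned earlier, can be sketched in a few lines of Python. The situations, actions and success test below are invented placeholders, not a real robot interface.

```python
import random

# Minimal sense-act sketch with a memory of actions that worked.
# Situations, actions and the success check are illustrative placeholders.

ACTIONS = ["step_over", "walk_around", "back_up"]

class LearningRobot:
    def __init__(self):
        self.memory = {}   # situation -> action that succeeded before

    def act(self, situation: str) -> str:
        # Prefer an action that already worked in this situation; otherwise explore.
        return self.memory.get(situation, random.choice(ACTIONS))

    def observe(self, situation: str, action: str, succeeded: bool) -> None:
        if succeeded:
            self.memory[situation] = action   # store the successful action for next time

def simulate():
    robot = LearningRobot()
    for _ in range(10):
        situation = "obstacle_ahead"                 # sensing
        action = robot.act(situation)                # choosing an action
        succeeded = (action == "step_over")          # stand-in for the real environment
        robot.observe(situation, action, succeeded)  # learning from the outcome
    print(robot.memory)   # after a few tries the successful action is remembered

if __name__ == "__main__":
    simulate()
```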
First use of the word Robotics:

The acclaimed Czech playwright Karel Capek (1890-1938) made the first use of the word robot, from the Czech word for forced labor or serf. The word robot was introduced in his play R.U.R. (Rossum's Universal Robots). The word "robotics" was first used in Runaround, a short story published in 1942 by Isaac Asimov. I, Robot, a collection of several of these stories, was published in 1950. One of the first robots Asimov wrote about was a robotherapist. A modern counterpart to Asimov's fictional character is Eliza, created in 1966 by Massachusetts Institute of Technology professor Joseph Weizenbaum, who wrote Eliza, a computer program for the study of natural language communication between man and machine. It was initially programmed with 240 lines of code to simulate a psychotherapist by answering questions with questions.

4.3. Natural language processing (NLP):


Natural Language Processing is a branch of artificial intelligence that deals with analyzing, understanding and generating the languages that humans use naturally, in order to interface with computers in both written and spoken contexts using natural human languages instead of computer languages.

One of the challenges inherent in natural language processing is teaching computers to understand the way humans learn and use language. Take, for example, the sentence "Baby swallows fly." This simple sentence has multiple meanings, depending on whether the word "swallows" or the word "fly" is used as the verb, which also determines whether "baby" is used as a noun or an adjective. In the course of human communication, the meaning of the sentence depends on both the context in which it was communicated and each person's understanding of the ambiguity in human languages. This sentence poses problems for software that must first be programmed to understand context and linguistic structures.

Modern NLP algorithms are based on machine learning, especially statistical machine learning. The paradigm of machine learning is different from that of most prior attempts at language processing. Prior implementations of language-processing tasks typically involved the direct hand coding of large sets of rules. The machine-learning paradigm calls instead for using general learning algorithms - often, although not always, grounded in statistical inference - to automatically learn such rules through the analysis of large corpora of typical real-world examples. A corpus (plural, "corpora") is a set of documents (or sometimes, individual sentences) that have been hand-annotated with the correct values to be learned. Systems based on machine-learning algorithms have many advantages over hand-produced rules:

- The learning procedures used during machine learning automatically focus on the most common cases, whereas when writing rules by hand it is often not obvious at all where the effort should be directed.
- Automatic learning procedures can make use of statistical inference algorithms to produce models that are robust to unfamiliar input (containing words or structures that have not been seen before) and to erroneous input (with misspelled words or words accidentally omitted). Generally, handling such input gracefully with hand-written rules - or, more generally, creating systems of hand-written rules that make soft decisions - is extremely difficult, error-prone and time-consuming.
- Systems based on automatically learning the rules can be made more accurate simply by supplying more input data. However, systems based on hand-written rules can only be made more accurate by increasing the complexity of the rules, which is a much more difficult task. In particular, there is a limit to the complexity of systems based on hand-crafted rules, beyond which the systems become more and more unmanageable. However, creating more data to input to machine-learning systems simply requires a corresponding increase in the number of man-hours worked, generally without significant increases in the complexity of the annotation process.
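To make the corpus-based learning idea concrete, here is a minimal sketch in Python: a "most frequent tag" baseline learned from a tiny hand-annotated corpus. The corpus and tags below are invented for illustration and are far too small to be useful in practice.

```python
from collections import Counter, defaultdict

# Tiny hand-annotated "corpus": sentences as (word, part-of-speech) pairs. Illustrative only.
ANNOTATED_CORPUS = [
    [("the", "DET"), ("baby", "NOUN"), ("swallows", "VERB"), ("food", "NOUN")],
    [("swallows", "NOUN"), ("fly", "VERB"), ("south", "ADV")],
    [("planes", "NOUN"), ("fly", "VERB"), ("high", "ADV")],
    [("a", "DET"), ("fly", "NOUN"), ("lands", "VERB")],
]

def train(corpus):
    """Learn the most frequent tag for each word - the simplest statistical tagger."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        for word, tag in sentence:
            counts[word.lower()][tag] += 1
    return {word: tags.most_common(1)[0][0] for word, tags in counts.items()}

def tag(sentence, model, default="NOUN"):
    return [(w, model.get(w.lower(), default)) for w in sentence]

if __name__ == "__main__":
    model = train(ANNOTATED_CORPUS)
    print(tag(["Baby", "swallows", "fly"], model))
    # Each word simply receives its most frequent tag from the corpus;
    # a real tagger would also use the surrounding context to resolve the ambiguity.
```

Adding more annotated sentences improves the model without changing a single line of code, which is exactly the advantage over hand-written rules described above.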

There are three major aspects of any natural language understanding theory:

1. Syntax - The syntax describes the form of the language. It is usually specified by a grammar. Natural language is much more complicated than the formal languages used for the artificial languages of logics and computer programs.
2. Semantics - The semantics provides the meaning of the utterances or sentences of the language. Although general semantic theories exist, when we build a natural language understanding system for a particular application, we try to use the simplest representation we can. For example, a system may use a fixed mapping between words and concepts in the knowledge base, which is inappropriate for many domains but simplifies development.
3. Pragmatics - The pragmatic component explains how the utterances relate to the world. To understand language, an agent should consider more than the sentence; it has to take into account the context of the sentence, the state of the world, the goals of the speaker and the listener, special conventions, and the like.

A small illustration of the syntactic aspect is sketched below.
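The sketch below enumerates the part-of-speech assignments that a toy grammar allows for "Baby swallows fly", the ambiguous sentence discussed earlier. Both the lexicon and the single grammar rule are invented for illustration, not a real grammar of English.

```python
from itertools import product

# Toy lexicon: each word may play more than one part of speech. Illustrative only.
LEXICON = {
    "baby":     {"NOUN", "ADJ"},
    "swallows": {"NOUN", "VERB"},
    "fly":      {"NOUN", "VERB"},
}

# One toy grammar rule: only these tag sequences count as a sentence (deliberately simplified).
def is_sentence(tags):
    return tags in {("NOUN", "VERB", "NOUN"),   # "(the) baby swallows (a) fly"
                    ("ADJ", "NOUN", "VERB")}    # "(the) baby swallows fly (away)"

def parses(words):
    """Enumerate every tag assignment the lexicon allows and keep the grammatical ones."""
    candidates = product(*(LEXICON[w.lower()] for w in words))
    return [tags for tags in candidates if is_sentence(tags)]

if __name__ == "__main__":
    for tags in parses(["Baby", "swallows", "fly"]):
        print(list(zip(["Baby", "swallows", "fly"], tags)))
    # Two readings survive - exactly the ambiguity a real parser must resolve
    # using semantics and pragmatics, not syntax alone.
```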

4.4. Artificial intelligence (video games):


In video games, artificial intelligence is used to produce the illusion of intelligence in the behavior of non-player characters (NPCs). The techniques used typically draw upon existing methods from the field of artificial intelligence (AI). However, the term game AI is often used to refer to a broad set of algorithms that also include techniques from control theory, robotics, computer graphics and computer science in general.

Ever since the beginning of AI, there has been a great fascination in pitting the human expert against the computer. Game playing provided a high-visibility platform for this contest. It is important to note, however, that the performance of the human expert and the AI game-playing program reflect qualitatively different processes. More specifically, as mentioned earlier, the performance of the human expert utilizes a vast amount of domain-specific knowledge and procedures. Such knowledge allows the human expert to generate a few promising moves for each game situation (irrelevant moves are never considered). In contrast, when selecting the best move, the game-playing program exploits brute-force computational speed to explore as many alternative moves and consequences as possible. As the computational speed of modern computers increases, the contest of knowledge vs. speed is tilting more and more in the computer's favor, accounting for recent triumphs like Deep Blue's win over Garry Kasparov. A minimal sketch of this kind of brute-force move exploration is given after the readings below.

Readings for this section have been organized into two groups. The first group contains two papers that discuss game playing in general, describing certain attempts and certain techniques common to the area; the next two sites include an updated report on the advances in the field and a site with many different examples of AI in games. The second group deals specifically with Deep Blue's success, and its implications both for AI and for society.


Game Playing
- Computers, Games, and The Real World - This Scientific American article by Matthew Ginsberg discusses the current work being done in game-playing AI, such as the now-famous Deep Blue. While it may seem somewhat frivolous to spend all this effort on games, one should remember that games present a controlled world in which a good player must learn to problem-solve rapidly and intelligently.
- Machine Learning in Game Playing - This resource page, assembled by Jay Scott, contains a large number of links to issues related to machine game playing.
- Report on games, computers and artificial intelligence - An updated and easy to understand summary of the status of game playing and AI.
- University of Alberta Games Group - This is a very interesting page describing the various projects with which the UofA Games Group is involved. Notice all the various games where AI seems to have surpassed human intelligence.

Deep Blue
- Deep Blue - The official homepage for Deep Blue. Includes a large amount of information (including game transcripts and video) on the Kasparov vs. Deep Blue rematch that ended in Deep Blue's favour.
- How Intelligent is Deep Blue? - Drew McDermott, a university AI lecturer, discusses the appropriateness of calling Deep Blue 'intelligent'. Does Deep Blue fail to be intelligent because it uses raw computing power to process 200,000,000 moves before deciding, or is that the essence of intelligence?
- What does Deep Blue Mean? - This article talks about some of the extreme reactions to Deep Blue's victory over Kasparov. Does this mean the beginning of a race of super-intelligent computers that will dominate us in everything? Is this an empty victory? Tim van Gelder looks at both interpretations.
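To illustrate the brute-force search referred to above, here is a minimal minimax sketch in Python for a very simple game (Nim: players alternately remove 1-3 sticks, and the player who takes the last stick wins). The game is a toy stand-in chosen for brevity; Deep Blue combined a vastly deeper search with specialized hardware and handcrafted evaluation.

```python
# Minimal minimax sketch: exhaustively explore alternative moves and their consequences.

def minimax(sticks: int, maximizing: bool) -> int:
    """Return +1 if the maximizing player can force a win, -1 otherwise."""
    if sticks == 0:
        # The previous player took the last stick and won.
        return -1 if maximizing else +1
    outcomes = (minimax(sticks - take, not maximizing)
                for take in (1, 2, 3) if take <= sticks)
    return max(outcomes) if maximizing else min(outcomes)

def best_move(sticks: int) -> int:
    """Explore every alternative move and pick the one with the best guaranteed outcome."""
    return max((take for take in (1, 2, 3) if take <= sticks),
               key=lambda take: minimax(sticks - take, maximizing=False))

if __name__ == "__main__":
    print(best_move(10))   # 2: leaves 8 sticks, a losing position for the opponent
```

The human expert would recognize the winning pattern (leave a multiple of four) at a glance; the program instead enumerates every line of play, which is exactly the knowledge vs. speed contrast described above.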

4.5. Neural networks:


A biological neural network is composed of a group or groups of chemically connected or functionally associated neurons. A single neuron may be connected to many other neurons, and the total number of neurons and connections in a network may be extensive. Connections, called synapses, are usually formed from axons to dendrites, though dendrodendritic microcircuits and other connections are possible. Apart from the electrical signaling, there are other forms of signaling that arise from neurotransmitter diffusion.

Artificial intelligence, cognitive modeling, and neural networks are information processing paradigms inspired by the way biological neural systems process data. Artificial intelligence and cognitive modeling try to simulate some properties of biological neural networks. In the artificial intelligence field, artificial neural networks have been applied successfully to speech recognition, image analysis and adaptive control, in order to construct software agents (in computer and video games) or autonomous robots.

Historically, digital computers evolved from the Von Neumann model, and operate via the execution of explicit instructions via access to memory by a number of processors. On the other hand, the origins of neural networks are based on efforts to model information processing in biological systems. Unlike the Von Neumann model, neural network computing does not separate memory and processing. Neural network theory has served both to better identify how the neurons in the brain function and to provide the basis for efforts to create artificial intelligence.
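As a minimal sketch of the artificial counterpart to the biological networks described above, the Python below implements a single artificial neuron (a weighted sum of inputs passed through a sigmoid activation) and wires a few of them into a tiny two-layer network. The weights are fixed, arbitrary values chosen only for illustration; a real network would learn them from data.

```python
import math

# One artificial neuron: a weighted sum of its inputs passed through a sigmoid activation.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # sigmoid squashes the result into (0, 1)

# A tiny fixed two-layer network: 2 inputs -> 2 hidden neurons -> 1 output neuron.
# The weights and biases below are arbitrary illustrative values, not learned ones.
HIDDEN = [([ 2.0,  2.0], -1.0),
          ([-2.0, -2.0],  3.0)]
OUTPUT = ([ 2.0,  2.0], -3.0)

def forward(x1, x2):
    """Propagate the two inputs through the hidden layer and then the output neuron."""
    hidden_outputs = [neuron([x1, x2], w, b) for w, b in HIDDEN]
    out_weights, out_bias = OUTPUT
    return neuron(hidden_outputs, out_weights, out_bias)

if __name__ == "__main__":
    for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print((x1, x2), round(forward(x1, x2), 3))
```

Note how there is no separate "memory" and "processor" here: the knowledge of the network lives entirely in its connection weights, which is the contrast with the Von Neumann model mentioned above.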


5. Conclusion:
We conclude that AI works as an artificial human brain which has unbelievable artificial thinking power. AI is a complicated, exciting and rewarding discipline. A revised definition of AI is: "AI is the study of mechanisms underlying intelligent behaviour through the construction and evaluation of artefacts that attempt to enact those mechanisms." Today AI has become an essential part of the technology industry, providing the heavy lifting for many of the most difficult problems in computer science. It is now in routine use in various fields such as medicine, engineering, the military and economics, as well as being built into many home computer software applications, games, finance, aviation and heavy industry.


6. Literature:

http://www.aaai.org/
http://www.used-robots.com
http://en.wikipedia.org/wiki/Neural_network
http://science.howstuffworks.com
http://www.webopedia.com

