2. Engineering
The accomplishments of engineering can be seen in nearly every aspect of our daily lives, from
transportation to communications, and from entertainment to health care. Although each of these applications is
unique, the process of engineering is largely the same across them: it begins with carefully analyzing a problem,
continues with intelligently designing a solution for that problem, and ends with efficiently transforming that
design into physical reality.
3. Types of Manufacturing
Although there are many ways to categorize manufacturing, three general categories stand out (which
probably have emerged from production planning and control lines of thought):
1. Job-shop production. A job shop produces in small lots or batches.
2. Mass production. Mass production involves machines or assembly lines that manufacture discrete units repetitively.
3. Continuous production. The process industries produce in a continuous flow.
Primary differences among the three types center on output volume and variety and process flexibility.
Table 1 matches these characteristics with the types of manufacturing and gives examples of each type. The
following discussion begins by elaborating on Table 1. Next are comments on hybrid and uncertain types of
manufacturing. Finally, five secondary characteristics of the three manufacturing types are presented.
Table 1 (excerpt). Examples of each manufacturing type:
  Job-shop production:    tool and die making, casting (foundry), baking (bakery)
  Mass production:        auto assembly, bottling, apparel manufacturing
  Continuous production:  paper milling, refining, extrusion
Similarly, mass production of apparel can employ production lines, with stoppages for pattern changes.
More conventionally, the industry has used a very different version of mass production: cutters, sewers, and
others in separate departments each work independently, and material handlers move components from
department to department to completion. Thus, the existence of an assembly line or production line is not a
necessary characteristic of mass production.
4. Intelligent Agents
4.1. Agent Definition
Although the word agent is used in a multitude of contexts and is part of our everyday vocabulary, there is
no single all-encompassing meaning for it. Perhaps the most widely accepted definition of the term is that "an
agent acts on behalf of someone else, after having been authorized". This definition can be applied to software
agents, which are instantiated by, and act on behalf of, a user or a software program that controls them. Thus, one
of the most characteristic agent features is its agency. In the rest of this book, the term agent is synonymous with
software agent. The difficulty in defining an agent arises from the fact that the various aspects of agency are
weighted differently depending on the application domain at hand. Agent learning, for example, is considered of
pivotal importance in certain applications, while in others it may be considered not only unnecessary but even
undesirable. Consequently, a number of definitions could be provided, each reflecting different agent objectives.
Wooldridge and Jennings have succeeded in combining general agent features into the following generic,
yet abstract, definition:
An agent is a computer system that is situated in some environment, and that is capable of autonomous
action in this environment, in order to meet its design objectives [Wooldridge and Jennings, 1995].
It should be noted that Wooldridge and Jennings defined the notion of an "agent" and not an "intelligent
agent". When intelligence is introduced, things get more complicated, since a nontrivial part of the research
community believes that true intelligence is not feasible [Nwana, 1997]. Nevertheless, "intelligence" here refers
to computational intelligence and should not be confused with human intelligence [Knapik and Johnson, 1998].
A fundamental agent feature is its degree of autonomy. Moreover, agents should have the ability to
communicate with other entities (similar or dissimilar agents) and should be able to exchange information
in order to reach their goals. Thus, an agent is also defined by its interactivity, which can be expressed either as
proactivity (the absence of passiveness) or as reactivity (the absence of deliberation) in its behavior. Finally,
agents can be characterized by other key features, such as their learning ability, their cooperativeness, and their
mobility.
One can easily argue that a generic agent definition entailing the above characteristics cannot satisfy
researchers in the fields of Artificial Intelligence and Software Engineering, since the same features could be
ascribed to a wide range of entities (e.g., humans, machines, computational systems). Within the context of this
book, agents are used as an abstraction tool, or a metaphor, for the design and construction of systems. Thus, no
distinction between agents, software agents, and intelligent agents is made. A fully functional definition is
employed, integrating all the characteristics into the notion of an agent:
An agent is an autonomous software entity that, functioning continuously, carries out a set of goal-oriented
tasks on behalf of another entity, either a human or a software system. This software entity is able to perceive its
environment through sensors and act upon it through effectors, and in doing so, it employs some knowledge or
representation of the user's preferences [Wooldridge, 1999].
For each possible percept sequence, a rational agent should select an action that is expected to maximize its
performance measure, given the evidence provided by the percept sequence, and whatever built-in knowledge
the agent has.
• Rationality is distinct from omniscience (all-knowing with infinite knowledge)
• Agents can perform actions in order to modify future percepts so as to obtain useful information
(information gathering, exploration)
• An agent is autonomous if its behavior is determined by its own experience (with ability to learn and
adapt)
A simple agent program can be defined mathematically as an agent function f, which maps every possible
percept sequence to an action the agent can perform (or to a coefficient, feedback element, function, or constant
that affects its eventual actions):

f : P* → A

where P* denotes the set of all finite percept sequences and A the set of available actions.
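As a sketch, the agent function can be implemented directly as a lookup table over percept sequences; the percepts, actions, and table entries below are invented for illustration, and real agents replace the table with a decision procedure.

```python
# Sketch of the agent function f: P* -> A as a lookup table over percept
# sequences. Percepts, actions, and table entries are hypothetical.

def make_table_driven_agent(table, default_action):
    """Return an agent program closed over a table mapping P* to actions."""
    percepts = []  # the percept sequence accumulated so far

    def agent_program(percept):
        percepts.append(percept)
        # Look up the entire percept sequence; fall back on a default action.
        return table.get(tuple(percepts), default_action)

    return agent_program

# A tiny vacuum-world-style table: each key is a full percept sequence.
table = {
    (("A", "dirty"),): "suck",
    (("A", "clean"),): "move_right",
    (("A", "clean"), ("B", "dirty")): "suck",
}

agent = make_table_driven_agent(table, default_action="no_op")
print(agent(("A", "clean")))  # move_right
print(agent(("B", "dirty")))  # suck
```

The closure keeps the percept history, so the same percept can map to different actions depending on what came before, exactly as the definition of f over P* requires.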
Among computational agents we may also identify a broad category that is nowadays the most popular one,
namely the agents generally called software agents (or weak agents, as in Wooldridge and Jennings, 1995, to
differentiate them from the cognitive ones, which correspond to the strong notion of agent): information agents
and personal agents. An information agent has access to one or several sources of information, and is able to
collect, filter, and select relevant information on a subject and present it to the user. Personal agents, or interface
agents, act as a kind of personal assistant to the user, facilitating tedious tasks such as e-mail message filtering
and classification, user interaction with the operating system, and management of daily activity scheduling.
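A personal agent of this kind can be sketched as a rule-based e-mail filter; the rules, folder names, and messages below are hypothetical stand-ins for the user's preferences.

```python
# Toy personal agent for e-mail filtering: fixed keyword rules stand in for
# the user's preferences. Rules, folders, and messages are hypothetical.

def classify(message, rules):
    """Return the target folder for a message; 'inbox' if no rule fires."""
    subject = message["subject"].lower()
    for keyword, folder in rules:
        if keyword in subject:
            return folder
    return "inbox"

# Each rule: (keyword to look for in the subject, destination folder).
rules = [("invoice", "finance"), ("meeting", "calendar"), ("sale", "spam")]

messages = [
    {"subject": "Quarterly invoice attached"},
    {"subject": "Team meeting moved to 3pm"},
    {"subject": "Hello from an old friend"},
]
print([classify(m, rules) for m in messages])
# ['finance', 'calendar', 'inbox']
```

A learning personal agent would induce such rules from the user's past filing behavior instead of hard-coding them.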
An example intelligent agent, described by its percepts, actions, goals, and environment:

  Agent type:   English tutor
  Percepts:     typed words
  Actions:      display exercises, suggestions, corrections
  Goals:        maximize student's exam results
  Environment:  set of students, exam papers
Environments in which agents operate can be defined in different ways. It is helpful to view the following
definitions as referring to the way the environment appears from the point of view of the agent itself.
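One way to make such environment views explicit is a small record of standard properties; the property checklist is the conventional one, and the crossword classification below is an illustrative example.

```python
# A small record type for describing an environment from the agent's point
# of view. The property list is the usual checklist; the crossword example
# classification is illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Environment:
    observable: bool      # can the agent sense the complete state?
    deterministic: bool   # is the next state fixed by state plus action?
    episodic: bool        # are episodes independent of one another?
    static: bool          # does the world stand still while the agent thinks?
    discrete: bool        # finitely many distinct percepts and actions?
    single_agent: bool    # is the agent operating alone?

# A crossword puzzle viewed as an environment.
crossword = Environment(observable=True, deterministic=True, episodic=False,
                        static=True, discrete=True, single_agent=True)
print(crossword.single_agent)  # True
```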
The basic idea in an object-oriented approach is to view a software system as a collection of interacting
entities called "objects":
• Objects are defined by an identity, a state (member variables), and a behavior (invoked methods).
• The interactions among objects are described in terms of "messages".
• Instance objects sharing common characteristics are usually grouped into classes. A number of
different relationships can hold among classes. Fundamental ones are inheritance/classification and
decomposition/aggregation, which relate classes to each other, and "instance-of", which relates a class
to its instances.
Examples of object-oriented languages are Smalltalk, C++, and Java.
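These object-oriented notions can be illustrated with a short sketch; the class names are invented for the example.

```python
# Illustrating the object-oriented concepts above: identity, state,
# behaviour, messages (method calls), classes, and inheritance.

class Shape:                       # a class groups objects with shared traits
    def __init__(self, name):
        self.name = name           # state: member variables

    def area(self):                # behaviour: invoked methods
        raise NotImplementedError

class Rectangle(Shape):            # inheritance: a Rectangle *is a* Shape
    def __init__(self, w, h):
        super().__init__("rectangle")
        self.w, self.h = w, h

    def area(self):
        return self.w * self.h

r = Rectangle(3, 4)                # r is an instance of Rectangle
print(isinstance(r, Shape))        # True  (the 'instance-of' relation)
print(r.area())                    # 12    (sending the 'area' message)
```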
In an agent-oriented approach, a software system is viewed as a collection of interacting entities called
agents. Like objects, agents have an identity, a state and a behaviour, but these are described in more
sophisticated terms:
– State: knowledge, beliefs, goals, responsibilities, etc.
– Behaviour: roles that can be played, actions that can be performed, reactions to events,
responsibilities, etc.
The behaviour of an agent is defined in terms of how to decide what to do (and not in terms of what it
should do).
• Agents are autonomous, that is, they act on behalf of the user without requiring constant supervision
• Agents contain some level of intelligence, ranging from fixed rules to learning engines that allow them
to adapt to changes in their environment
• Agents do not act only reactively, but sometimes also proactively
• Agents have social ability, that is, they communicate with the user, the system, and other agents as
required
• Agents may also cooperate with other agents to carry out more complex tasks than they themselves can
handle
• Agents may migrate from one system to another to access remote resources or even to meet other
agents
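The listed features can be combined in a minimal agent sketch; the thermostat domain, thresholds, and method names are all invented for illustration.

```python
# A minimal agent sketch combining the listed features: it perceives through
# a sensor, reacts to the current percept, acts proactively toward a goal,
# and can adapt a simple internal rule. All names are hypothetical.

class ThermostatAgent:
    def __init__(self, goal_temp):
        self.goal_temp = goal_temp      # goal the agent pursues autonomously

    def decide(self, temp):
        # Reactive: respond to the current percept (the temperature reading).
        # Proactive: always steer toward the goal, not just answer events.
        if temp < self.goal_temp - 1:
            return "heat_on"
        if temp > self.goal_temp + 1:
            return "heat_off"
        return "idle"

    def adapt(self, new_goal):
        # A fixed-rule stand-in for learning: update the internal goal.
        self.goal_temp = new_goal

agent = ThermostatAgent(goal_temp=21)
print(agent.decide(18))   # heat_on
print(agent.decide(21))   # idle
agent.adapt(18)
print(agent.decide(21))   # heat_off
```

Social ability and mobility are omitted here; in a full system the agent would also exchange messages with other agents through a communication layer.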
5. Conclusions
The concept of an agent is associated with many different kinds of software and hardware systems. Still, we
found that there are similarities among the many different definitions of agents.
Unfortunately, the meaning of the word "agent" still depends heavily on who is speaking.
There is no consensus on what an agent is, but several key concepts are fundamental to this paradigm. We
have seen:
– The main characteristics upon which our agent definition relies
– Several types of software agents
– How an agent differs from other software paradigms
References
• Russell, Stuart J. and Peter Norvig (2003). Artificial Intelligence: A Modern Approach (2nd ed.).
Upper Saddle River, NJ: Prentice Hall. ISBN 0-13-790395-2. http://aima.cs.berkeley.edu/, chpt. 2.
• Womack, J., D. Jones, and D. Roos (1990). The Machine That Changed the World. Rawson
Associates, New York. Published by Simon & Schuster.
• Bradshaw, Jeff M. (ed.). Software Agents. AAAI Press / The MIT Press.
• Jennings, N. and M. Wooldridge (eds.). Agent Technology. Springer.
• Müller, Jörg P. The Design of Intelligent Agents. Springer.
• Subrahmanian, V. S., P. Bonatti, et al. Heterogeneous Agent Systems. The MIT Press.
• Kalpakjian, Serope and Steven Schmid (2005). Manufacturing, Engineering & Technology. Prentice
Hall, pp. 22-36, 951-988. ISBN 0-1314-8965-8.
• Franklin, Stan and Art Graesser (1996). Is it an Agent, or just a Program? A Taxonomy for
Autonomous Agents. Proceedings of the Third International Workshop on Agent Theories,
Architectures, and Languages. Springer-Verlag.
• Kasabov, N. (1998). Introduction: Hybrid intelligent adaptive systems. International Journal of
Intelligent Systems, Vol. 6, 453-454.
• Wooldridge, M. and N. R. Jennings. Agent theories, architectures, and languages. In Wooldridge
and Jennings (eds.), Intelligent Agents. Springer-Verlag, 1999, pp. 1-22.
• Dennett, D. C. (1987). The Intentional Stance. The MIT Press.
• Koller, D. and A. Pfeffer (1997). Representations and solutions for game-theoretic problems.
Artificial Intelligence 94 (1-2), 167-216.
• Pollack, M. E. (1992). The use of plans. Artificial Intelligence 57 (1), 43-68.
• Nwana, H. S. (1997). Research and Development Challenges for Agent-Based Systems. IEE
Proceedings on Software Engineering, 144(1), pp. 2-10.
• Knapik, M. and J. Johnson (1998). Developing Intelligent Agents for Distributed Systems. New
York: McGraw-Hill.