


Ayse Yasemin SEYDIM

Department of Computer Science and Engineering
Southern Methodist University
Dallas, TX 75275
May, 1999


"The future of computing will be 100% driven by delegating to, rather than manipulating, computers."
Founding director of the Media Lab at MIT

In recent years, agents have become a very popular paradigm in computing
because of their flexibility, modularity and general applicability to a wide
range of problems. Technological developments in distributed computing,
robotics and the emergence of object-orientation have given rise to such
technologies to model distributed problem solving. The inherently parallel
and complex tasks of classifying and discovering patterns in large amounts
of data can be delegated to intelligent software agents in this context. In
this paper, the agent paradigm, its main applications, and the use of this
technology in data mining are briefly discussed.

Agents, special types of software applications, have become a very popular paradigm in
computing in recent years. Some of the reasons for this popularity are their flexibility,
modularity and general applicability to a wide range of problems. The recent increase in
agent-based applications is also due to technological developments in distributed
computing, robotics and the emergence of object-oriented programming paradigms.
Advances in distributed computing technologies have given rise to the use of agents that
can model distributed problem solving. In addition, the object-oriented programming
paradigm has introduced important concepts into the software development process that
are used in structuring agent-based approaches.
With the explosive growth of information sources available on the Internet, and in
business, government, and scientific databases, it has become increasingly necessary for
users to utilize automated and intelligent tools to find the desired information resources,
and to track, analyze, summarize, and extract knowledge from them. These factors
have given rise to the necessity of creating server-side and client-side intelligent systems
that can effectively mine for knowledge. Therefore, the inherently parallel and complex
tasks of classifying and discovering patterns in large amounts of data can be delegated to
intelligent software agents.

In this paper, the agent paradigm is briefly discussed with its definition and
classifications. Some common example applications in which agent technologies are
used, and the communication framework for agents, are also presented in Section 1. An
agent model for learning and filtering information is examined in Section 2. The concept
of Web Mining and the place of existing agent-based applications in this area is presented
in Section 3. Finally, the possible utilization of the agent paradigm in the data mining
process is analyzed in Section 4, and concluding remarks are given.


Agents are defined as software or hardware entities that perform some set of tasks on
behalf of users with some degree of autonomy [23]. In order to work for somebody as an
assistant, an agent has to include a certain amount of intelligence, which is the ability to
choose among various courses of action, plan, communicate, adapt to changes in the
environment, and learn from experience. In general, an intelligent agent can be described
as consisting of a sensing element that can receive events, a recognizer or classifier that
determines which event occurred, a set of logic ranging from hard-coded programs to
rule-based inferencing, and a mechanism for taking action [1] [4].

Other attributes that are important for agent paradigm include mobility and learning. An
agent is mobile if it can navigate through a network and perform tasks on remote
machines. A learning agent adapts to the requirements of its user and automatically
changes its behavior in the face of environmental changes.

For learning or intelligent agents, an event-condition-action paradigm can be defined [2].

In the context of intelligent agents, an event is defined as anything that happens to change
the environment or anything of which the agent should be aware. For example, an event
could be the arrival of new mail, or a change to a Web page. When an event
occurs, the agent has to recognize and evaluate what the event means and then respond to
it. This second step, determining what the condition or state of the world is, could be
simple or extremely complex depending on the situation. If mail has arrived, the event is
largely self-describing, but the agent may still have to query the mail system to find out
who sent the mail and what the subject is, or even scan the mail text for keywords. All of
this is part of the recognition component of the cycle. The initial event may wake up the
agent, but the agent then has to figure out what the significance of the event is in terms of
its duties. If the mail is from the user's boss, then the message can be classified as urgent.
This leads to the most useful aspect of intelligent agents: actions.

The main issue in the use of intelligent agents is the concept of autonomy [16]. The user
can give the responsibility of performing some time-consuming computer operations to
this "smart" software. By this way, the user becomes free to move on other tasks and even
disconnect from the computer while the software agent is running. Besides, the user does
not have to learn how to do the computer operation. In fact, intelligent agents can act as a
layer of software to provide the usability feature that many inexperienced users want from
computer professionals.

On the other hand, while autonomous to some degree as assistants, agents are not
totally independent of other agents with which they share an environment [12]. When
agents communicate and cooperate, it becomes possible to build organizations of
agents where the net effect is greater than the sum of the parts. The type of interaction
between agents can then be described as cooperative when they share goals, or self-
interested when their goals are totally independent. Agents having mutually exclusive
goals may be competitive, viewing other agents as opposing parties. However, the main
source of interest, richness and complexity in agent systems is the interaction between
agents.

Agent-based systems can be designed by considering the inherent interdependence
between the agents. It is important to manage cooperative agents through communication,
negotiation, coordination and an organizational division of tasks and responsibilities.
When the agents are competitive, the challenge becomes to manage and minimize the
negative effects of the agents' inherent independence. In addition, the capabilities and
intelligence level of each agent, the communication structures, and the user interface
should be considered.
Nevertheless, the notion of an agent is meant to be a tool for analyzing systems, not an
absolute characterization that divides the world into agents and non-agents [23].

A number of agent descriptions are examined in [9], and the attributes often found
in agents are listed as: autonomous, goal-oriented, collaborative, flexible, self-starting,
temporally continuous, having character, communicative, adaptive, mobile. On the other
hand, the main characteristics of the tasks for which agent technology is found suitable
include complexity, distribution and delivery, dynamic nature, information retrieval, high
volumes of data, and routine, repetitive, or time-critical work.

Some examples in which agent paradigms are frequently used include [12]:
- taking advantage of distributed computing resources, such as multiprocessor
  applications and distributed artificial intelligence problems,
- coordinating teams of interacting robots, where each robot necessarily has a
  physically separate processor and is capable of acting independently and
  autonomously,
- increasing system robustness and reliability: in situations where an agent is
  destroyed, others can still carry out the task,
- assisting users by reducing their work and information loads,
- modeling groups of interacting experts, as in concurrent engineering and other
  joint decision-making processes,
- simplifying the modeling of very complex processes as sets of interacting agents,
- modeling processes that are normally performed by multiple agents, such as
  economic processes involving groups of buying and selling agents.

Types of Agents

The nature of intelligent agents is such that they are optimized to perform certain
functions and tasks on behalf of a user or a computer system. A mapping done by IBM,
which considers intelligence and agency, is given in [1]. The graph, shown in Figure 1, is
used to compare different intelligent agents. On the intelligence axis, agents go from
simply specifying user preferences, to active reasoning through rule-based expert
systems, all the way up to agents that can learn as they go. Here, intelligence is defined as
the ability of the agent to capture and apply application domain-specific knowledge and
processing to solve problems.

Agency is defined as the degree of autonomy the software agent has in representing the
user to other agents, applications and computer systems. At a minimum, agents must run
independently or asynchronously on the system. In this respect, old PC-DOS terminate-
and-stay-resident (TSR) programs that check keystrokes are simple-minded agents. At the
next level of agency, an intelligent agent represents the user and interacts with the
operating system. More advanced agents communicate with applications running on the
system, and ultimately, interact with other intelligent agents.


[Figure 1 plots agency (user representation, data and application interactivity) against
intelligence (preferences, reasoning, learning), with expert systems placed along the
intelligence axis.]

Figure 1. IBM Intelligent Agent Graph [1]

Several categories or types of agents have been defined, based on their abilities and, more
often, on the tasks they are designed to perform. The difference lies in their capabilities
for performing those tasks. It seems that with intelligent agents, as with people,
knowledge and adaptability differentiate the successful ones from the less effective ones.
The major categories recognized are filtering agents, information agents, user interface
agents, office or workflow agents, system agents, and brokering or commercial agents.

Filtering Agents

With the explosive growth of information generated each day, no one has the time
to read through all of it. Filtering agents, as their name implies, act as a filter that
allows information of particular interest or relevance to users to get through, while
eliminating the flow of useless or irrelevant information.

Filtering agents work in several ways, of which the most widely used is one where the
user provides a template or profile of the topics or subjects that are of interest. When
presented with a list of documents in a database, a filtering agent scans the documents
and ranks them based on how well their content matches the user's area of interest.
Alternatively, the filtering agent can serve as an e-mail filter, automatically filing and
disposing of messages based on their sender or on their information content.
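Profile-based ranking can be sketched minimally as below, assuming the profile is just a set of keywords and the score is the fraction of profile terms a document contains; real filtering agents use richer term weighting.

```python
def rank_documents(profile: set[str], documents: dict[str, str]) -> list[tuple[str, float]]:
    """Rank documents by how well their content matches the user's interest profile."""
    scores = {}
    for title, text in documents.items():
        words = set(text.lower().split())
        # Score: fraction of profile terms present in the document.
        scores[title] = len(profile & words) / len(profile)
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Hypothetical profile and document collection.
profile = {"agents", "mining", "learning"}
docs = {
    "survey": "intelligent agents for data mining and learning",
    "recipe": "how to bake bread at home",
}
print(rank_documents(profile, docs))
```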

Filtering agents can also interact with other agents, if necessary. For example, a
filtering agent could send an e-mail marked "Urgent!" to a Notifier or Alarm agent, which
would inform the user that an urgent message has arrived. IBM's IntelliAgent is an
example of an e-mail filtering agent for use with Lotus Notes [1]. It provides a graphical
rule editor and a simple inference engine for automating routine e-mail handling.

A filtering agent with learning ability could automatically adapt the user's interest profile,
refining or broadening it, based on explicit feedback from the user, or by watching
which articles or documents get saved and which get deleted. An agent architecture model
used in the implementation of a filtering application is presented in Section 2.

Information Agents

A counterpart to the filtering agent, which cuts down the information received, is
the information agent, which goes out and actively finds information for the user.
Information agents, which are used primarily on the Internet and the World Wide Web,
can scan through online databases, document libraries, or directories in search of
documents that might be of interest to the user [1]. As a research or intelligence-gathering
tool, information agents can provide an invaluable service, keeping the user informed
of any developments in a field or of new web sites that contain information related to
their area of interest.

User Interface Agents

A user's skill may change from novice to expert when interacting with a desktop
application. User interface agents are used to monitor the user's interactions with the
application and can control various aspects of that interaction, such as the level of
prompting or the number of options available. For example, a new user typically needs
lots of help and few choices. More experienced users, however, find that verbose help
gets in the way, and they want to be able to easily access all features of a product. Coach
is said to be a user interface agent that monitors the user's interaction with a product and
creates personalized help based on that interaction [24].

A user interface agent can learn in four different ways, as defined in [16]. First, it can
observe and monitor the user's actions, find regularities and recurrent patterns, and offer
to automate them. A second source of learning is direct or indirect user feedback,
whereby the user grades the agent on how well it performed an action. Third, the agent
can be trained on examples explicitly given by the user. Finally, the agent can learn
through communication with other agents. All of these approaches imply supervised
learning from a data mining perspective.

Office or Workflow Agents

An office management agent automates the kinds of routine daily tasks that take up so
much time at the office. These tasks include scheduling meetings, sending faxes, holding
meeting review information, and updating process documents. Some of these tasks fall
under workgroup or workflow software, because they deal with documents and
calendars. Whatever name ultimately gets attached to these agents, their role in
automating common business functions is likely to produce some of the biggest
efficiency gains of any intelligent agent application.

System Agents

System agents are software agents whose main job is to manage the operations of a
computing system or a data communications network. These agents monitor for device
failures or system overloads and redirect work to other parts in order to maintain a level
of performance and/or reliability. As computer installations become more distributed, the
importance of system agents rises.

Network management agents have existed for years. For example, using Simple Network
Management Protocol (SNMP), these agents reside on devices connected to the network
and collect and report status information to the managing computer. An intelligent agent
that processes information collected by SNMP agents and uses it to detect anomalies that
typically precede a fault is proposed in [14]. The SNMP agents collect information about
the network node through their management information base (MIB), which holds a set of
variables pertinent to a particular node. The intelligent agents learn the normal behavior
of each measurement variable and combine the information in the probabilistic
framework of a Bayesian network. This gives a picture of the network's health from the
perspective of the network node, which can be used to trigger local corrective action or a
message to a centralized network manager.
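The idea of learning the normal behavior of each MIB variable can be sketched with a simple per-variable threshold model; the actual system in [14] combines such evidence in a Bayesian network, which is not reproduced here, and the measurement history below is invented for the example.

```python
from statistics import mean, stdev

def learn_normal(history: list[float]) -> tuple[float, float]:
    """Learn the normal behavior of one MIB variable from past measurements."""
    return mean(history), stdev(history)

def is_anomalous(value: float, mu: float, sigma: float, k: float = 3.0) -> bool:
    """Flag measurements more than k standard deviations from the learned mean."""
    return abs(value - mu) > k * sigma

# Hypothetical history of one variable, e.g. packets per second on a node.
history = [100.0, 102.0, 98.0, 101.0, 99.0]
mu, sigma = learn_normal(history)
print(is_anomalous(250.0, mu, sigma))  # a spike that may precede a fault
```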

On the other hand, intelligent system agents are involved not only in monitoring the
status of resources on the computer network; they are also active managers of those
resources. System agents must be proactive, responding not only to specific events in the
environment, but taking the initiative to recognize situations that call for preemptive
action [1]. Intelligent agents on a computer system could handle job scheduling to meet
performance goals. They could also be used to automatically adapt the allocation of
system resources to various classes of jobs. In this case, neural networks are used to
model the relationships between the computer workload, the available resources, and the
resulting performance. Acting as an intelligent resource manager, a neural network
controller could respond to changes in the workload by reallocating computer system
resources to balance the impact on the response times of various job classes. Similar
approaches have been used to balance workload across distributed computer systems,
and to satisfy quality-of-service levels in data communication networks.

Brokering or Commercial Agents

An agent that acts as a broker is a software program that takes a request from a buyer and
searches for a set of possible sellers using the buyer's criteria for the item of interest.
When potential sellers satisfying the request are found, the broker agent can return the
results to the user, who chooses a seller and manually executes the transaction. The agent
can also automatically execute the transaction on behalf of the user. In such trade, both
parties must trust their agents' ability to protect their interests and to meet their criteria.

The agent's internal knowledge must be opaque, since it must not be seen by the other
agent. On the other hand, the agent's identity must be verifiable, to make sure that it is
dealing with a legitimate seller agent and that no invalid transaction is charged to the
user's credit card. In this kind of agent architecture, multi-agent system issues become
very important, since many exciting applications involve the interaction of multiple
agents. The communication framework, knowledge representation, and belief systems in
a multi-agent interaction environment raise many research issues. The efforts toward
standardization and communication for agents are presented in the following paragraphs.

Communication Framework for Agents

Like an object, an agent provides a message-based interface independent of its internal
data structures and algorithms. The primary concern is how to reach the agent through
this interface by using a language that it can understand. As an exploration of
communication, the ARPA Knowledge Sharing Effort(1) has defined the components of
an agent communication language (ACL) that satisfies this need.

ACL is defined as having three parts: its vocabulary, an inner language called Knowledge
Interchange Format (KIF), and an outer language called Knowledge Query and
Manipulation Language (KQML) [10]. An ACL message is a KQML expression in which
the arguments are terms or sentences in KIF formed from words in the ACL vocabulary.
The vocabulary of ACL is listed in a large and open-ended dictionary of words
appropriate to common application areas. Each word in the dictionary has an English
description as well as formal annotations written in KIF for use by programs.

KIF is a language that was designed for the interchange of knowledge between agents.
Based on predicate logic, KIF is a flexible knowledge representation language that
supports the definition of objects, functions, relations, rules and meta-knowledge. While
it is possible to design an entire communication framework in which all messages take
the form of KIF sentences, this would be inefficient. The efficiency of communication
can be enhanced by providing a linguistic layer in which context is taken into account.
This function is accomplished by KQML. KQML is an evolving standard ACL and is
defined to be both a message format and a message-handling protocol to support run-time
knowledge sharing among agents [8] [20].
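As a sketch of this layering, the following assembles a KQML-style performative whose content is a KIF-like sentence. The field names follow common KQML usage, but the agent names, ontology-free query and content shown here are purely illustrative.

```python
def kqml_message(performative: str, sender: str, receiver: str, content: str) -> str:
    """Assemble a KQML-style message: an outer performative wrapping KIF content."""
    return (f"({performative}\n"
            f"  :sender {sender}\n"
            f"  :receiver {receiver}\n"
            f"  :language KIF\n"
            f"  :content \"{content}\")")

# An illustrative query: ask another agent for the price of a hypothetical item.
msg = kqml_message("ask-one", "buyer-agent", "seller-agent", "(price item-42 ?p)")
print(msg)
```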

While this effort may be premature, it is certain that some sort of common dialect
is needed for intelligent agents to communicate effectively. Agent architectures can fit
into a much more general effort to support interactions among various software entities.
Numerous approaches and technologies exist to support inter-process and inter-
application communication over computer networks (e.g. DCE, TCP/IP, OMG/CORBA,
OLE, ODBC, OpenDoc). Agent interactions, i.e. communications in ACL, can be layered
on top of many of these protocols. Other implementations have been performed that
allow agents to interact with non-agent applications or data sources.

Recent developments in Java-based application development technology have also had an
important impact on the agent paradigm. A Java program can communicate with other
programs using sockets, and within an application it can be a separate thread of control.
Java supports threaded applications and provides support for autonomy using both
techniques. An agent can be informed by sending it an event, which is a message or
method call that defines what happened or what action the agent must perform, as well as
the data required to process the event. There has been much research in this area, and
some of the Java-based agent development environments include Aglets from IBM, FTP
Software Agent Technology, Voyager, Odyssey, JATLite, InfoSleuth, Jess, and ABE
(Agent Building Environment) from IBM. Some of them provide automation, portability,
and mobility, but not intelligence.
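The same idea of an agent as a separate thread of control receiving events can be sketched in Python (rather than Java) using a queue to deliver events; the event names are illustrative.

```python
import queue
import threading

class SimpleAgent(threading.Thread):
    """An agent as a separate thread of control that processes incoming events."""

    def __init__(self) -> None:
        super().__init__()
        self.events: queue.Queue = queue.Queue()
        self.handled: list[str] = []

    def run(self) -> None:
        while True:
            event = self.events.get()  # block until an event arrives
            if event == "shutdown":
                break
            self.handled.append(f"processed {event}")

agent = SimpleAgent()
agent.start()
agent.events.put("new-mail")   # an event says what happened...
agent.events.put("shutdown")   # ...or what the agent must do
agent.join()
print(agent.handled)
```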

(1) The ARPA Knowledge Sharing Effort is a consortium formed to develop conventions
facilitating the sharing and reuse of knowledge bases and knowledge-based systems
[Neches].

Some Applications with Agents

Agent paradigms are useful for modeling communities of interacting robots in which each
robot is represented as a separate agent. Applications in which groups of interacting
robots can be used include nuclear waste clean-up, automated excavation, construction
and mining, and exploration of other planets like Mars [12].

An increasingly common type of agent is the Web-based agent, which takes advantage of
the network of information on the WWW. An agent called Softbot has been implemented
to act as an intelligent personal assistant to Web users searching for information [6]. The
agent figures out the best specific search method and data source for satisfying the user's
request. A compact disc shopping agent (BargainFinder), an e-mail filtering agent
(Maxims), a meeting scheduler (MCL), and a news filtering agent (NewT) are a set of
Internet-related examples. Also, there are a number of entertainment applications in
which agents play a role. A system called ALIVE models virtual storytelling, where the
characters in the story are modeled by agents that can interact with each other or with
the user.

One of the most interesting applications for systems that can reason about the beliefs of
other agents is in the area of softbots that act as automated personal assistants. As an
example, a business traveler might employ a personal software agent to book airline
flights. The agent would know the traveler's preferred travel times and have the authority
to represent the traveler and even make purchases on his/her behalf. The agent would
search the networks for the best price and alert the traveler if it found a special deal that
might cause a change in itinerary. On the day of the flight, another agent might
periodically communicate with an agent at the airline to monitor the status of the flight
and alert the traveler of any delays.

Agents can also be used to model or simulate processes that are normally carried out by
multiple independent agents in the real world. Consequently, the agent paradigm has
become popular for modeling economic and business practice. Using agents in the
electronic marketplace, as buying and selling agents or as automatic bidders in electronic
auctions, is shaping the way in which commerce takes place and increasing the use of
information technology in business.

Another important use of agents is in monitoring, such as watching gauges in nuclear
power plants, watching patients in intensive care, etc. These agents can help with
boring tasks and manage the information overload during crises. The degree of delegating
duties to monitoring tasks depends on how much capability and trust is given to the
agents.
As discussed before, agents are often used to assist in group work such as decision
making and problem solving. Support of group activities using computer tools, which is
referred to as Computer Supported Cooperative Work (CSCW), can be performed by
agent-based systems. SEDAR [12] is an example of a specific agent that acts as a
consultant for design and manufacturing purposes. Another example would be planning
agents in the battlefield, which can reduce cognitive workload by helping to differentiate
important information, generate plan alternatives, redesign plans due to sudden changes
in circumstances, and assess hypotheses about the world based on uncertain data.

Mobile Agents

An agent needs to interact with its host system and other agents in order to be useful.
Mobile agents are defined as programs that may be dispatched from a client computer
and transported to a remote server computer for execution and interaction with other
agents [11]. For certain applications, it is useful for agents to be able to move within
heterogeneous networks of computers. This is possible only if there is a common
framework for agent operations across the entire network. One approach is to provide an
agent infrastructure built using tools that are available for supporting agent mobility and
communication as the agents move among different computers and networks.

Mobile agents can move in a computer network from host to host as needed to perform
their tasks [25]. Created in one execution environment, they can transport their state and
code with them to another execution environment in the network, where they resume
execution. Seven reasons for using mobile agents are given in [15]:
- reducing the network load (reducing raw data flow over the network),
- overcoming network latency in real-time systems,
- encapsulating protocols (establishing channels for communication),
- asynchronous and autonomous execution (dispatching from mobile devices and
  becoming independent),
- dynamic adaptation (sensing and adapting to changes; finding an optimal
  configuration for solving a problem),
- heterogeneity (dependence only on their execution environments),
- robustness and fault tolerance (dynamic; can continue on another host).

Mobile agents offer an extension of the remote programming paradigm. As far as the
degree of autonomy is concerned, the agent is able to make intelligent decisions
regarding its itinerary and to modify it dynamically based on the information that
becomes available as it moves from one host to another. In many data mining and
knowledge discovery tasks, a mobile agent can visit geographically distributed data
repositories and return with knowledge, perhaps in the form of concise rules, that
captures the observed regularities in the data [25] [15].

Mobile agents provide a potentially efficient framework for performing computation in a
distributed fashion at the sites where the relevant data are available, instead of
expensively transferring large amounts of data across the network. On the other hand,
mobility introduces additional complexity to an intelligent agent, because it raises
concerns about security and cost.
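The trade-off can be sketched by contrasting shipping raw data with shipping a small summary computed at the data site; the record format and values here are hypothetical.

```python
# Contrast of the two styles: transfer raw data across the network vs. move
# the computation to the site where the data lives and return only a concise result.

REMOTE_DATA = [("sensor", 12.0), ("sensor", 14.0), ("sensor", 13.0)]  # hypothetical

def transfer_raw() -> list[tuple[str, float]]:
    """Naive approach: ship every record across the network."""
    return list(REMOTE_DATA)

def run_at_remote_site() -> dict[str, float]:
    """Mobile-agent approach: compute at the data site, return only the summary."""
    values = [v for _, v in REMOTE_DATA]
    return {"count": len(values), "mean": sum(values) / len(values)}

print(len(transfer_raw()), "records vs summary:", run_at_remote_site())
```

With realistic data volumes, the summary is orders of magnitude smaller than the raw records, which is exactly the network-load argument made above.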


A learning agent can adapt to its user's likes and dislikes. It can learn which agents to
trust and cooperate with, and which ones to avoid. A learning agent can recognize
situations it has been in before and improve its performance based on prior experience.
While more difficult to implement, a learning agent would obviously be much more
valuable than a fixed-function agent. Learning provides the mechanism for an initially
generic filtering agent to adapt and become a truly personal filtering agent.

The ultimate goal for intelligent agents is to have them learn as they perform tasks for the
user. Depending on the technology used, learning can be done in a number of ways [2]:
- rote learning (mechanical): copies an example and reproduces it exactly,
- parameter or weight adjustment: adjusts weighting factors over time to improve
  the likelihood of a correct decision (neural network learning),
- induction: learning by example, extracting the important characteristics of the
  problem to allow generalization from inputs (decision trees and neural networks
  both perform induction, which can be used for classification or prediction
  problems),
- chunking or abstraction of knowledge: detects common patterns and generalizes
  them to new situations,
- clustering: looks at high-dimensional data and scores samples for similarity
  based on some criterion (similarity is used as a way of assigning meaning to a
  group of samples).
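Of these, clustering is easy to illustrate. The sketch below greedily groups samples whose pairwise similarity (here, shared features under the Jaccard measure) exceeds a threshold; the feature sets and the greedy scheme are made up for the example.

```python
def similarity(a: set[str], b: set[str]) -> float:
    """Jaccard similarity: shared features over all features."""
    return len(a & b) / len(a | b)

def cluster(samples: dict[str, set[str]], threshold: float = 0.5) -> list[list[str]]:
    """Greedy clustering: join a sample to the first cluster it is similar to."""
    clusters: list[list[str]] = []
    for name, features in samples.items():
        for group in clusters:
            rep = samples[group[0]]  # compare against the cluster's first member
            if similarity(features, rep) >= threshold:
                group.append(name)
                break
        else:
            clusters.append([name])
    return clusters

# Hypothetical samples described by symbolic feature sets.
samples = {
    "doc1": {"agents", "mining", "data"},
    "doc2": {"agents", "data", "learning"},
    "doc3": {"bread", "baking"},
}
print(cluster(samples))
```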

The use of learning techniques to develop a profile of a user's preferences not only
eliminates the need for programming rules, but also allows the agent to adapt to changes.
Therefore, learning, as applied to data mining, can be thought of as a way for intelligent
agents to automatically discover knowledge rather than having it predefined using
predicate logic, rules or some other representation.

An interface agent architecture that learns from observations has been developed and
applied to two different information filtering domains: classifying incoming mail
messages (Magi) and identifying interesting USENET news articles (UNA) [21]. A
graphical user interface (GUI) is used to interact with the underlying application. As it is
used, observations are made from which the agent can produce a user profile. The
learning interface agent architecture is given in Figure 2. The observations, consisting
of articles and the actions performed on them, are used to generate training examples by
passing them to the Feature Extraction module. Features extracted from new articles
are passed to the Classification stage, where the user profile is used to classify them.
The results are then evaluated by the Prediction stage and a prediction is made.

A classification in this context is an action that the agent believes should be performed on
the message. The Prediction stage evaluates the strength of the classification for each new
message and generates a confidence rating, which is a measure of the certainty of the
classification [22]. The classification and confidence rating together form the agent's
prediction. For the interest rating, the Feature Extraction module identifies fields in the
news articles and extracts words based on their frequency within the text. The term values
are used to generate the user profile and subsequently make predictions about the articles.

Two different learning algorithms have been used within this architecture: a rule
induction algorithm, CN2, and a k-nearest neighbor (k-NN) algorithm called IBPL [21].
CN2 generates human-comprehensible rules by performing induction over training data
containing specific features. On the other hand, IBPL was chosen to contrast with the
symbolic approach, and to overcome some of the problems encountered by CN2 when
learning from textual data. In CN2, a large number of examples containing single values
for each attribute must be generated. In IBPL, however, each attribute contains a set of
one or more symbolic values.

[Figure 2 shows the architecture: observations from the GUI pass through Feature
Extraction to Profile Generation, and features of new articles pass through Feature
Extraction to the Classification and Prediction stages.]

Figure 2. A Learning Interface Agent Architecture [21]

CN2 is a supervised learning algorithm that constructs ordered production rules. When
these rules are used in the classification stage, many examples are generated from each
new article. This provides a means of producing a confidence rating for each prediction
made by the agent. The examples may fire different rules, leading to different
classifications; the number of rules that fire for each classification is summed and a
confidence rating is generated. IBPL, on the other hand, derives a classification by
comparing the new instance with previously classified instances. This comparison is done
by calculating a similarity measure (distance) between values using a mapping of each
symbol to a distribution matrix. For the details of the classification algorithms, the reader
should see the original papers [21] and [22].
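The rule-voting scheme behind the confidence rating can be sketched as follows; the data shapes (a list of examples and an ordered list of predicate/label rules) are illustrative assumptions, not the original CN2 implementation:

```python
def predict_with_confidence(examples, rules):
    """Classify an article's generated examples with an ordered rule
    list and derive a confidence rating from the vote counts.

    `rules` is a list of (predicate, class_label) pairs; each example
    fires the first rule whose predicate it satisfies. This is a
    CN2-style sketch, not the algorithm from [21].
    """
    votes = {}
    for ex in examples:
        for predicate, label in rules:  # ordered rule list
            if predicate(ex):
                votes[label] = votes.get(label, 0) + 1
                break
    best = max(votes, key=votes.get)
    # Confidence: fraction of rule firings supporting the winner.
    confidence = votes[best] / sum(votes.values())
    return best, confidence
```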

As new observations are made and new training examples are created, the examples are
time-stamped. In this way, the training set can be pruned with respect to time, so that the
number of training examples is reduced and old examples (which may not reflect the
user's current interests) are removed. The two learning approaches differ in the way new
training data are integrated into the user profile. CN2 periodically (e.g., every night)
induces a new rule set, pruning out old examples and adding new ones. IBPL, on the
other hand, introduces new examples into the training set as soon as they are created, so
their effects are immediate.
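The time-based pruning can be sketched in a few lines, assuming each training example is stored as a (value, timestamp) pair:

```python
import time

def prune_training_set(examples, max_age_seconds):
    """Drop time-stamped training examples older than a cutoff, so the
    profile tracks the user's current interests.

    A minimal sketch; the (value, timestamp) example format is an
    illustrative assumption.
    """
    now = time.time()
    return [(ex, ts) for ex, ts in examples if now - ts <= max_age_seconds]
```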

The main concern of the study [21] is the development of a testbed to explore different
aspects of interface agent technology. Differences in induction time and in how training
data are applied to user profiles, as well as the size of the training example sets and user
log files, need to be considered in the design of such intelligent, learning agent-based
systems.

The feasibility of using machine learning in the design of intelligent agents for
information retrieval and document classification is also explored in [25]. The Voyager
mobile agent platform was used for the implementation, and three classifiers, namely
TF-IDF, Naive Bayesian and DistAI, are used for classification. The classifiers are
incorporated into mobile agents, and the relevant documents determined by these
classifiers at the remote site are then returned to the local site. Instead of downloading
all documents from the remote sites, only the relevant documents are found and
downloaded, which in turn minimizes the duration of the network connection.
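As a sketch of how such a relevance filter might score documents at the remote site, here is a textbook TF-IDF computation (not the actual classifier code from [25]):

```python
import math
from collections import Counter

def tfidf_scores(documents, query_terms):
    """Score documents against query terms with TF-IDF, so a mobile
    agent could ship back only the highest-scoring ones.

    A textbook sketch with a smoothed IDF term; names and shapes are
    illustrative assumptions.
    """
    n = len(documents)
    tokenized = [doc.lower().split() for doc in documents]
    # Document frequency of each query term across the collection.
    df = {t: sum(1 for d in tokenized if t in d) for t in query_terms}
    scores = []
    for d in tokenized:
        tf = Counter(d)
        score = sum(tf[t] * math.log((1 + n) / (1 + df[t]))
                    for t in query_terms)
        scores.append(score)
    return scores
```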


Web Mining

With the explosive growth of information available on the Internet, automated tools for
finding the needed information, and server-side and client-side intelligent systems for
mining knowledge, have become very important. The term Web mining is defined as the
discovery and analysis of useful information from the World Wide Web [3]. It includes
the automatic search of information resources available on-line, i.e., Web Content
Mining, and the discovery of user access patterns, i.e., Web Usage Mining. The
corresponding classification of Web mining is given in Figure 3.

Web Content Mining

The unstructured nature of the information sources on the World Wide Web makes
automated discovery of Web-based information difficult. Traditional search engines
like Lycos, AltaVista, and WebCrawler provide some information to users but do not
provide structural information or categorization, filtering, or interpretation of the
documents. These factors have led many researchers to build more intelligent tools for
information retrieval, such as intelligent Web agents, and to extend data mining
techniques to provide a higher level of organization for the semi-structured data
available on the Web [7].

Web Mining
  - Web Content Mining
      - Agent-Based Approach: Intelligent Search Agents;
        Information Filtering/Categorization; Personalized Web Agents
      - Database Approach: Multilevel Databases; Web Query Systems
  - Web Usage Mining: Preprocessing; Transaction Identification;
    Pattern Discovery Tools; Pattern Analysis Tools

Figure 3. Taxonomy of Web Mining [3]

Agent-based approaches in Web mining include intelligent search agents, information
filtering/categorization agents and personalized Web agents. Several intelligent agents
have been developed that search for relevant information using domain characteristics
and user profiles to organize and interpret the discovered information. Harvest,
FAQFinder, Information Manifold, OCCAM, and ParaSite are examples of such
systems. Many Web agents using various information retrieval techniques have also
been developed to automatically retrieve, filter and categorize Web documents.
HyPursuit and BO (Bookmark Organizer) use hierarchical clustering techniques to
organize the collections of retrieved Web documents. New clustering techniques, based
on generalizations of graph partitioning and capable of automatically discovering
document similarities or associations, have been implemented for Web page
categorization and feature selection [17]. Personalized Web agents, such as WebWatcher,
PAINT, Syskill & Webert, GroupLens and Firefly, are based on learning user preferences
and using collaborative filtering.

Database approaches to Web mining have focused on techniques for organizing the
semi-structured data on the Web into more structured collections of resources, such as
relational databases consisting of levels of Web repositories or hierarchies of metadata.
The gathered information can then be analyzed using standard querying mechanisms and
data mining techniques. For example, TSIMMIS extracts data from heterogeneous and
semi-structured information sources and correlates them to generate an integrated
database representation of the extracted information [3].

Web Usage Mining

Web usage mining is the automatic discovery of user access patterns from Web servers.
Organizations collect large volumes of data in their daily operations, generated
automatically by Web servers and collected in server access logs. Other sources of user
information include referrer logs which contain information about the referring pages for
each page reference, and user registration or survey data gathered via CGI scripts.
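Server access logs of this period typically follow the Common Log Format, so a usage-mining agent's first preprocessing step is to parse each entry into fields. A minimal sketch (the regex covers only the basic format, not extended variants):

```python
import re

# Common Log Format as written by most Web servers:
# host ident authuser [timestamp] "request" status bytes
LOG_PATTERN = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<url>\S+) \S+" (?P<status>\d+) \S+'
)

def parse_log_line(line):
    """Extract host, timestamp, URL and status from one access-log
    entry; a preprocessing sketch for Web usage mining."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None
```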

Analyzing such data can help organizations determine the lifetime value of customers,
cross-marketing strategies across products, and the effectiveness of promotional
campaigns. It can also provide information on how to restructure a Web site to serve
users more effectively. User access patterns help in targeting advertisements to specific
groups of users. More sophisticated systems and techniques for the discovery and
analysis of patterns are being developed. For pattern discovery, techniques from
artificial intelligence, data mining, psychology, and information theory are used to mine
knowledge from the collected data. For example, the WEBMINER system automatically
discovers association rules and sequential patterns from server access logs, and it also
proposes an SQL-like mechanism for querying the discovered knowledge. A general
Web usage mining architecture has been developed in [3] and partly implemented in the
WEBMINER system. Since our major perspective on Web mining is from the agent
paradigm, the reader should refer to that study for details of the architecture.
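The co-occurrence counting at the heart of such association-rule discovery over user sessions can be sketched as follows (a didactic sketch with illustrative names, not the WEBMINER algorithm itself):

```python
from collections import Counter
from itertools import combinations

def frequent_page_pairs(sessions, min_support):
    """Count co-occurring page pairs across user sessions and keep
    those above a support threshold -- the simplest form of
    association-rule discovery over access-log sessions.
    """
    counts = Counter()
    for session in sessions:
        # Each distinct pair of pages visited in the same session.
        for pair in combinations(sorted(set(session)), 2):
            counts[pair] += 1
    n = len(sessions)
    return {pair: c / n for pair, c in counts.items() if c / n >= min_support}
```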


Across the several steps of knowledge discovery, which include data preparation, mining
model selection and application, and output analysis, the intelligent agent paradigm can
be used to automate individual tasks. In data preparation, agents can be used especially
for analyzing sensitivity to learning parameters, applying triggers on database updates,
and handling missing or invalid data.

For the mining models themselves, we have seen agent-based studies implemented for
classification, clustering, summarization and generalization, which have a learning
nature and involve rule generation, since current learning methods are able to find
regularities in large data sets. An intelligent agent can combine domain knowledge
embedded as simple rules with learning from training data, reducing the need for domain
experts. In the interpretation of what is learned, a scanning agent can go through the
generated rules and facts and identify items that may contain valuable information.

Data preparation in data mining involves data selection, data cleansing, data
preprocessing, and data representation [1]. With the use of intelligent agents, several of
these steps can possibly be automated. One possibility for automating the data selection
step is to perform automatic sensitivity analysis to determine which parameters should be
used in learning. This would reduce the dependency on having a domain expert available
to examine the problem every time something changes in the environment.
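One way to sketch such an automatic sensitivity analysis is leave-one-attribute-out evaluation: remove each attribute in turn and measure how much a caller-supplied scoring function degrades. The scheme and all names here are illustrative, not a published algorithm:

```python
def sensitivity_ranking(rows, labels, evaluate):
    """Rank attributes by how much the score drops when each is removed.

    `evaluate(rows, labels)` is any caller-supplied scoring function
    (e.g. cross-validated accuracy of a learner). Attributes whose
    removal hurts most are the ones worth selecting.
    """
    baseline = evaluate(rows, labels)
    n_attrs = len(rows[0])
    drops = {}
    for a in range(n_attrs):
        # Re-evaluate with attribute `a` removed from every row.
        reduced = [r[:a] + r[a + 1:] for r in rows]
        drops[a] = baseline - evaluate(reduced, labels)
    return sorted(drops, key=drops.get, reverse=True)
```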

Data cleansing could be automated through the use of an intelligent agent with a rule
base. When a record is added or updated in a relational database, a trigger could call the
intelligent agent to examine the transaction data. The rules in its rule base would specify
how to cleanse missing or invalid data.
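The trigger-driven cleansing idea can be sketched as a callback holding a small rule base; the rule format (a field name mapped to an invalidity test and a default value) is an assumption for illustration, not a specific database API:

```python
def make_cleansing_agent(rules):
    """Return a trigger callback that cleanses a record before storage.

    `rules` maps each field name to an (is_invalid, default) pair;
    missing or invalid values are replaced with the default.
    """
    def on_insert(record):
        cleansed = dict(record)
        for field, (is_invalid, default) in rules.items():
            value = cleansed.get(field)
            if value is None or is_invalid(value):
                cleansed[field] = default  # repair missing/invalid data
        return cleansed
    return on_insert
```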

Data preprocessing also requires domain knowledge, since there is no way to know the
semantics of the attributes and of relationships such as computed or derived fields.
However, the more standard preprocessing and data representation steps, such as scaling
or dimensionality reduction, symbol mapping, and normalization, which are usually
specified by the data mining expert, could be automated using rules and basic statistical
information about the variables.
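A minimal sketch of two such automatable steps, min-max normalization and symbol-to-integer mapping, assuming simple list-valued attributes:

```python
def preprocess(values):
    """Min-max normalize a numeric attribute, or integer-code a
    symbolic one -- routine steps an agent could apply from basic
    statistics about the variable. A minimal sketch.
    """
    if all(isinstance(v, (int, float)) for v in values):
        lo, hi = min(values), max(values)
        span = hi - lo or 1  # avoid division by zero on constant columns
        return [(v - lo) / span for v in values]
    # Symbolic attribute: map each distinct symbol to an integer code.
    codes = {s: i for i, s in enumerate(sorted(set(values)))}
    return [codes[v] for v in values]
```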

Searching for patterns of interest by applying learning and intelligence to classification,
clustering, summarization and generalization can also be accomplished by intelligent
agents. An agent can learn from a profile or from examples, and feedback from the user
can be used to refine confidence in the agent's predictions. Data mining using neural
networks, and possible uses of intelligent agents in the data mining process, are discussed
in [1]. In the understanding of what is learned, agents can serve simply as fixed programs
for visualization.

The major advantage of using intelligent agents in the automation of data mining is their
possible support for mining online transaction data. When new data is added to the
database, an alarm or triggering agent can send events to the main mining application
and to the learning task within it, so that the new data can be evaluated against the
already mined data. This automated decision support using triggers in data mining is
called active data mining by Agrawal and Psaila. Since the main mining functions can be
performed using learning methods, implementing these methods with intelligent agents
provides a flexible, modular and delegated solution. Additionally, the paradigm lends
itself to parallelizing data mining algorithms, given its suitability for distributed
environments.
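The alarm/trigger idea can be sketched with a simple publish-subscribe object; the class and method names are illustrative, not taken from the active data mining literature:

```python
class TriggerAgent:
    """Watch for database inserts and notify the mining task -- the
    triggering pattern described in the text, in skeletal form.
    """

    def __init__(self):
        self.listeners = []

    def subscribe(self, callback):
        """Register a mining/learning task to be notified of new data."""
        self.listeners.append(callback)

    def on_insert(self, record):
        # New data triggers re-evaluation against the mined model.
        for callback in self.listeners:
            callback(record)
```

In a real deployment the `on_insert` hook would be wired to a database trigger; here any caller can play that role.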


Conclusions

The exponential growth of available information requires the development of useful,
efficient tools and software to assist users in reaching the valuable items. Software
agents, special flexible software programs, can be used to automate the discovery of this
information. With some degree of autonomy, agents can include a certain amount of
intelligence to apply domain-specific knowledge to retrieve, filter and classify
information, find patterns and make predictions. This paper has presented a short survey
of the agent paradigm in the context of information retrieval, filtering, classification and
learning, and its possible use in data mining tasks.

Agent-based approaches are becoming increasingly important because of their generality,
flexibility, modularity and ability to take advantage of distributed resources. Agents are
used for information retrieval, entertainment, coordinating systems of multiple robots,
and modeling economic systems. They are useful in reducing work and information
overload, and in complex tasks such as medical monitoring and battlefield reasoning.
Agents provide an efficient framework for distributed computation, where the retrieval
of only relevant documents minimizes the duration of expensive network connections.

There has been a lot of work in the area of artificial intelligence and software agents,
but we have generally looked through a data mining perspective and concentrated on
classification, information retrieval, and learning. Current research directions in
artificial intelligence that include the agent paradigm, as indicated in [5], are:

- autonomous agents and robots, which integrate other areas such as knowledge
representation, learning, decision-making, speech and language processing, and image
analysis, to create robust, active entities capable of independent, intelligent, real-time
interactions with an environment over an extended period;
- multiagent systems, which identify the knowledge, representations, and
procedures needed by agents to work together or around each other.

Continuing research issues include agent architectures, communication and coordination
protocols, control negotiation, and reuse of agents [12]. Agents used on the Web are
already changing the way we gather information and conduct business, and they have a
great impact on our lives. As suggested in [13], because of the dynamic nature of the
Internet, the growth of data and the heterogeneity of the services, extracting valuable
information from the huge amount of stored data is becoming a task that cannot be
performed by users alone. Therefore, the intelligent agent paradigm can be used in many
applications with a distributed nature and learning mechanisms.


References

[1] J.P.Bigus, Data Mining with Neural Networks - Solving Business Problems - from
Application Development to Decision Support, McGraw-Hill, 1996.

[2] J.P.Bigus, J.Bigus, Constructing Intelligent Agents with Java - A Programmer's
Guide to Smarter Applications, John Wiley & Sons Inc., 1998.

[3] R.Cooley, B.Mobasher, J.Srivastava, "Web Mining: Information and Pattern

Discovery on the World Wide Web", Proceedings of Ninth IEEE Int. Conference on
Tools with Artificial Intelligence, Newport Beach, California, November 3-8, 1997.

[4] T.Dean, J.Allen, Y.Aloimonos, Artificial Intelligence: Theory and Practice, The
Benjamin/Cummings Publishing Co. Inc., 1995.

[5] J.Doyle, T.Dean, et.al.,"Strategic Directions in Artificial Intelligence", ACM

Computing Surveys, vol.28, no.4, pp.653-670, 1996.

[6] O.Etzioni, D.A.Weld, "A Softbot Interface to the Internet", Communications of the
ACM, vol.37, no.7, pp.72-79, July 1994.

[7] O.Etzioni, The World-Wide Web: Quagmire or Gold Mine?, Communications of

the ACM, vol.39, no.11, pp. 65-68. Nov. 1996.

[8] T.Finin, R.Fritzson, D.McKay, R.McEntire, KQML as an Agent Communication

Language, http://www.cs.umbc.edu/kqml/papers/kqml-acl-html/root2.html, 1994.

[9] S.Franklin, A.Graesser, "Is It an Agent, or Just a Program?: A Taxonomy for
Autonomous Agents", Intelligent Agents III: Agent Theories, Architectures, and
Languages, Proceedings of ECAI'96 Workshop (ATAL), Hungary, Aug. 1996 (from
Lecture Notes in Artificial Intelligence 1193, pp. 21-35, 1997).

[10] M.R.Genesereth, S.P.Ketchpel, Software Agents, Communications of the ACM,

vol.37, no.7, pp. 48-53, July 1994.

[11] C.G.Harrison, D.M.Chess, A.Kershenbaum, Mobile Agents: Are they a good

idea?, IBM Research Report, T.J.Watson Research Center, NY, 1995.

[12] C.C.Hayes, "Agents in a Nutshell- A Very Brief Introduction", IEEE Transactions

on Knowledge and Data Engineering, vol.11, no.1, pp.127-132, Jan/Feb 1999.

[13] B.Hermans, Intelligent Software Agents on the Internet: an inventory of currently

offered functionality in the information society & a prediction of (near-)future

developments", Tilburg University, Tilburg, The Netherlands, July 1996,
http://www.hermans.org/agents/ (downloaded : 4/29/99).

[14] C.S.Hood, C.Ji, Intelligent Agents for Proactive Fault Detection , IEEE Internet
Computing, vol.2, no.2, pp.65-72, Mar/Apr 1998.

[15] D.B.Lange, M.Oshima, Seven Good Reasons for Mobile Agents, Communications
of the ACM, vol. 42, no. 3, pp. 88-89, March 1999.

[16] P.Maes, "Agents that Reduce Work and Information Overload", Communications of
the ACM, vol.37, no.7, pp 31-40, July 1994.

[17] J.Moore et al., "Web Page Categorization and Feature Selection Using Association
Rule and Principal Component Clustering", Department of Computer Science and
Engineering, University of Minnesota, MN,
http://maya.cs.depaul.edu/~mobasher/wits/wits.html, 1997.

[18] R.Murch, T.Johnson, Intelligent Software Agents, Prentice-Hall, 1999.

[19] R.Neches, The Knowledge Sharing Effort,

http://www-ksl.stanford.edu/knowledge-sharing, 1994.

[20] H.S.Nwana, M.Wooldridge, Software Agent Technologies, Software Agents and

Soft Computing: Towards Enhanced Machine Intelligence, Lecture Notes in
Artificial Intelligence 1198, pp.59-77, 1997.

[21] T.R.Payne, P.Edwards, C.L.Green, Experience with Rule Induction and k-Nearest
Neighbor Methods for Interface Agents that Learn, IEEE Transactions on
Knowledge and Data Engineering, vol.9, no.2, pp. 329-335, Mar/Apr 1997.

[22] T.R.Payne, P.Edwards, Interface Agents that Learn: An Investigation of Learning

Issues in a Mail Agent Interface, Applied Artificial Intelligence, vol.11, pp.1-32,

[23] S.Russell, P.Norvig, Artificial Intelligence: A Modern Approach, Prentice-Hall,


[24] T.Selker, Coach: a Teaching Agent that Learns, Communications of the ACM,
vol.37, no.7, pp. 93-99, 1994.

[25] J.Yang, P.Pai, V. Hanovar, L.Miller, "Mobile Intelligent Agents for Document
Classification and Retrieval: A Machine Learning Approach", Proceedings of the
European Symposium on Cybernetics and System Research, Vienna, Austria, 1998.