
Saturday, July 08, 2006

intelligent agents paper..

Note: I made this paper for my CS 180 class this sem.. for anyone who's interested, you may read on.. this is based on Franklin and Graesser's paper.. references include sources from the net.. God bless everyone! =)

Intelligent agents - sounds cool and interesting! When I first read about them, I enjoyed the fact that these agents are "intelligent." How? By possessing several characteristics that could be associated with intelligence. So, what are intelligent agents? Before we go in-depth into the discussion of these agents, we must first have a notion of what a software agent is.

Software agents are programs that perform tasks for the user. According to the definition provided by the ever-dependable Wikipedia, "a software agent is an abstraction, a logical model that describes software that acts for a user or other program in a relationship of agency. Such 'action on behalf of' implies the authority to decide when (and if) an action is appropriate." A more formal way of defining an agent would be as follows: an agent describes a software abstraction, an idea, or a concept, similar to OOP (object-oriented programming) terms such as methods, functions, and objects. But unlike objects, which are defined in terms of methods and attributes, an agent is defined in terms of its behavior.
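To make the contrast concrete, here is a minimal sketch of my own (the classes and the reminder task are hypothetical, not from Wikipedia or from the paper). The object is fully described by its attributes and methods, which a caller invokes on demand; the agent is described by its behavior - it decides for itself which reminders are due and acts on the user's behalf:

```python
import time

class BankAccount:
    """An object: defined by its attributes and methods, invoked on demand."""
    def __init__(self, balance=0.0):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

class ReminderAgent:
    """An agent: defined by its behavior on the user's behalf."""
    def __init__(self, reminders):
        self.reminders = sorted(reminders)   # (due_time, message) pairs

    def run(self, now=None):
        # The agent itself decides which reminders are due right now.
        now = time.time() if now is None else now
        while self.reminders and self.reminders[0][0] <= now:
            _, message = self.reminders.pop(0)
            print("Reminder:", message)

agent = ReminderAgent([(0, "submit CS 180 paper")])
agent.run()   # -> Reminder: submit CS 180 paper
```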

Different authors have proposed their own definitions of an agent, but there are some concepts they agree on. These concepts are persistence (code is not executed on demand but runs continuously and decides for itself when it should perform some activity), autonomy (agents are capable of task selection, prioritization, goal-directed behaviour, and decision-making without human intervention), social ability (agents are able to engage other components through some sort of communication and coordination, and may collaborate on a task), and reactivity (agents perceive the context in which they operate and react to it appropriately).
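As a rough illustration of my own (not from Wikipedia), the four agreed-upon properties can be read off a single control loop; `sense`, `act`, the inbox, and the goal predicates are hypothetical stand-ins:

```python
import queue

def agent_loop(sense, act, inbox, goals):
    # Persistence: the loop runs continuously, not on demand.
    while goals:
        percept = sense()
        if percept is not None:
            act(percept)               # reactivity: respond to the context
        try:
            act(inbox.get_nowait())    # social ability: handle messages
        except queue.Empty:            #   from other agents or components
            pass
        # Autonomy: the agent itself decides which goals remain open.
        goals = [goal for goal in goals if not goal()]
```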

The above-mentioned information from Wikipedia gives us some background on the topic I will discuss in this paper. I'm supposed to write something about a paper presented at the Third International Workshop on Agent Theories, Architectures, and Languages - "Is it an Agent, or Just a Program?: A Taxonomy for Autonomous Agents" by Stan Franklin and Art Graesser of the Institute for Intelligent Systems, University of Memphis. My goal is to give a sort of review of Franklin and Graesser's work. Starting with the definition of an agent, which I got from an online encyclopedia, I will elaborate on the topic using the data provided in the source article. But before that, let me state my thoughts upon reading the article's abstract and introduction.

As I mentioned in the first statements of this paper, I was interested to learn more about intelligent agents because I liked the fact that they are intelligent. But then I thought, "how are they different from programs?" Definitions have stated that they are meant to perform specific tasks for the user - but so are programs. So what is the difference when they have the same goals and purpose of 'existence'? Are they not similar because agents are said to be "intelligent"? But I think programs have a sense of intelligence too. How would they be able to run properly if they didn't have the capability to "think"? What I mean is, in order for programs to achieve their goals, they must, first of all, know what those goals are. Somehow, they have a thinking capacity, just like agents. Another source of confusion is the fact that both "exist" in order to simplify human tasks. Aren't the two concepts interchangeable? Well, the answer lies in the heart of the article, and it will be revealed as I go on with this paper.

The Ultimate Question: What is an Agent?

When I hear the word agent, what comes to my mind are real-life agents such as sales agents and promoters, realty agents, or simply an agency. In simple terms, apart from the computer science world, an agent is just some sort of mediator or link from one entity to another - from a company to a prospect, for example. These are some of our notions of what agents are in the real world, but what about the agents in the "artificial" world - that is, the world of computer science? Here are the ideas regarding agents, taken from different sources, which Franklin and Graesser stated in their work.

1. MuBot Agent - The term agent is used to represent two orthogonal concepts. The first is the agent's ability for autonomous execution. The second is the agent's ability to perform domain oriented reasoning.
Points Raised: There are two things I would like to note in this definition: autonomous execution and domain-oriented reasoning.

2. AIMA Agent - An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors.
Points Raised: This is the agent I'm most familiar with, so it will serve as my basis for comparison and analysis in the latter part of this paper. Key concepts here are the environment, sensors, and actuators/effectors. An additional factor, which was not mentioned here but is given emphasis in the book, is the performance measure. I'll give a clearer explanation later (a small sketch of this percept-action cycle appears right after this list of definitions).

3. Maes Agent - Autonomous agents are computational systems that inhabit some complex dynamic environment, sense and act autonomously in this environment, and by doing so realize a set of goals or tasks for which they are designed.
Points Raised: What I want to note here are the terms computational systems, complex dynamic environment, sense and act autonomously, and goals.

4. KidSim Agent - ...agent as a persistent software entity dedicated to a specific purpose. 'Persistent' distinguishes agents from subroutines; agents have their own ideas about how to accomplish tasks, their own agendas. 'Special purpose' distinguishes them from entire multi-function applications; agents are typically much smaller.
Points Raised: The first notable idea here is persistent, the next one is special purpose.

5. Hayes-Roth Agent - Intelligent agents continuously perform three functions: perception of dynamic conditions in the environment; action to affect conditions in the environment; and reasoning to interpret perceptions, solve problems, draw inferences, and determine actions.
Points Raised: Key points raised here are the three functions of intelligent agents; perception, action, and reasoning.

6. IBM Agent - Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy, and in so doing, employ some knowledge or representation of the user's goals or desires.
Points Raised: Important terms include set of operations, degree of independence/autonomy, knowledge, and goals.

7. Wooldridge & Jennings Agent - ...a hardware or (more usually) software-based computer system that enjoys the following properties:
-- autonomy: agents operate without the direct intervention of human or others, and have some kind of control over their actions and internal state;
-- social ability: agents interact with other agents (and possibly humans) via some kind of agent-communication language;
-- reactivity: agents perceive their environment, and respond in a timely fashion to changes that occur in it;
-- pro-activeness: agents do not simply act in response to their environment, they are able to exhibit goal-directed behavior by taking the initiative.
Points Raised: The essential points raised are the four listed properties themselves: autonomy, social ability, reactivity, and pro-activeness.

8. SodaBot Agent - Software agents are programs that engage in dialogs [and] negotiate and coordinate transfer of information.
Points Raised: This definition stands somewhat apart from the others. But I would like to take note of the terms dialogs, negotiate, and transfer of information.

9. Foner Agent - A software agent is a program that performs tasks for its user. While this may sound just like any program, agents have somewhat special properties, which the enormous amount of media hype (and subsequent misuse of the term) has clouded in recent years.
Points Raised: In addition to the above definition, Foner added the notions of trust, personalizability, and autonomy, which I think would be helpful in my analysis.

10. Brustoloni Agent - Autonomous agents are systems capable of autonomous, purposeful action in the real world.
Points Raised: One important idea stated here is the existence of agents in the "real" world, and of course not to forget the notion of autonomy.

11. FAQ Agent - This FAQ will not attempt to provide an authoritative definition..
Points Raised: According to the article, the FAQ Agent does provide a list of attributes often found in agents. And the list includes the following: autonomous, goal-oriented, collaborative, flexible, self-starting, temporal continuity, character, communicative, adaptive, and mobile.
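Going back to the AIMA definition (no. 2) for a moment: a minimal sketch of its percept-action cycle, in my own words, might look like the following. The environment methods (`percept`, `apply`, `reward`) are hypothetical stand-ins, and the running score plays the role of the performance measure mentioned above:

```python
def run(environment, agent_program, steps=10):
    """One percept-action cycle per step."""
    score = 0
    for _ in range(steps):
        percept = environment.percept()   # sensors observe the environment
        action = agent_program(percept)   # the agent maps percepts to actions
        environment.apply(action)         # actuators/effectors change it
        score += environment.reward()     # performance measure accumulates
    return score
```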

ANALYSIS & COMPARISONS

Based on the facts we've gathered, we're now ready to examine which definitions overlap and which ones differ. I've listed several terms that are common across the definitions. Here are the similar concepts raised:

1. Autonomy (MuBot, Maes, IBM, Wooldridge & Jennings, Foner, Brustoloni, FAQ). An agent is autonomous if it can learn to compensate for partial or incorrect prior knowledge. An agent should not depend only on the information fed to it. In simple terms, it should undergo a process of learning.

2. Environment (AIMA, Maes, Hayes-Roth, Wooldridge & Jennings). Environments are where agents operate. Task environments are, essentially, the 'problems' to which agents are the 'solutions.' Although these definitions all mention environments, their approaches to the topic are not the same. The AIMA agent describes a broad environment, which could be observable or not, deterministic or stochastic, episodic or sequential, multiagent or single-agent, static or dynamic, and discrete or continuous. Maes specified that the environment is a complex dynamic one, and Hayes-Roth said it is a dynamic environment. A broader view of an environment is given by Wooldridge & Jennings. According to them, an environment could be a physical world, a user via a graphical user interface, the Internet, or all of these combined. Wooldridge & Jennings' environment has fewer limitations and a wider scope.

3. Sensors & Actuators (AIMA, Maes, Hayes-Roth, Wooldridge & Jennings, SodaBot, Brustoloni). Agents use sensors to perceive the environment and actuators to act upon that environment. In the AIMA agent, the sensors are responsible for receiving input and the actuators for producing output. The Maes agent senses and acts autonomously in its dynamic environment in order to realize a set of goals. Similarly, one of the functions performed by the Hayes-Roth agent is acting to affect conditions in the environment. Reactivity (agents perceive their environment and respond to it) as a property of a Wooldridge & Jennings agent can also be associated with sensors and actuators, and thus belongs to the same group. Another agent that could be listed here is the SodaBot: negotiating and coordinating the transfer of information, as an ability of software agents, requires sensing and acting. Although quite different from the others because of the idea that agents must perform purposeful actions in the real world, a Brustoloni agent still belongs to this group because of the actions involved in fulfilling its purpose.

4. Goals. Every agent is meant to achieve certain goals. They're created for a purpose, either in the artificial or in the real world.

5. Reasoning (MuBot, Hayes-Roth). The MuBot agent has the ability to perform domain-oriented reasoning, and the Hayes-Roth agent performs reasoning to interpret perceptions, solve problems, draw inferences, and determine actions.

STRUCTURE & ARCHITECTURE OF SOME AGENTS

1. Maes Agent. Pattie Maes has developed an agent architecture in which an agent is defined as a set of competence modules. Each module is specified by the designer in terms of pre- and post-conditions, and an activation level, which gives a real-valued indication of the relevance of the module in a particular situation. The higher the activation level of a module, the more likely it is that this module will influence the behaviour of the agent. Once specified, a set of competence modules is compiled into a spreading activation network, in which the modules are linked to one another in ways defined by their pre- and post-conditions.
There are obvious similarities between the agent network architecture and neural network architectures. Perhaps the key difference is that it is difficult to say what the meaning of a node in a neural net is; it only has a meaning in the context of the net itself. Since competence modules are defined in declarative terms, however, it is much easier to say what their meaning is.
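Here is a toy rendering of my own (not Maes's actual algorithm) of how competence modules might be wired together: each module carries preconditions, effects, and an activation level, and the most activated executable module fires. The goal-driven boosting below is only a crude stand-in for true spreading activation through the network:

```python
class CompetenceModule:
    def __init__(self, name, preconditions, effects, activation=0.0):
        self.name = name
        self.preconditions = set(preconditions)  # when the module applies
        self.effects = set(effects)              # what it adds to the state
        self.activation = activation             # relevance to the situation

    def executable(self, state):
        return self.preconditions <= state

def step(modules, state, goals):
    # Boost modules whose effects achieve a goal, then fire the most
    # activated module whose preconditions hold in the current state.
    for m in modules:
        m.activation += len(m.effects & goals)
    runnable = [m for m in modules if m.executable(state)]
    if not runnable:
        return state
    winner = max(runnable, key=lambda m: m.activation)
    winner.activation = 0.0
    return state | winner.effects
```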

2. Hayes-Roth Agent. Barbara Hayes-Roth proposed that an intelligent agent must be adaptable, versatile, and exhibit coherent behavior. It is adaptable in the sense that it has the ability to respond to an event in a dynamic environment with an acceptable response time. The agent must be versatile, meaning it has the ability to vary its responses based on what it has already learned and on the current environmental context. Coherency requires that all the distinct systems the agent uses to adapt to its environment, and all the strategies it can take advantage of to be versatile, be integrated into a coherent overall plan of action developed by the agent.
The dynamic control architecture consists of a cognition system and independent perception and action systems. All subsystems operate concurrently and asynchronously and communicate through an independent but globally accessible communications interface (CI). This underlying modular structure allows for response times more appropriate to the environment by interacting with subsets of the environment concurrently, thereby reducing the overall complexity each subsystem must deal with.
The input and output modules consist of limited-size buffers and dynamically modifiable perceptual filters, determined by the cognition component. The limited input buffers, located in the CI, are fed information at varying rates depending on the perceptual filter. Therefore, the more important the data, the more often the system will see it. However, the CI has a limited buffer size, and therefore if the cognitive system does not look at the buffer often enough, events may pass by in the world without notice. This strategy serves to further limit the environmental complexity encountered, by first focusing the agent's attention (perceptual filters) and then limiting the number of environmental events which can be active at one time (limited I/O buffers). Therefore, it is important for the cognitive system to function in real time and reason about the resources available to it.
The cognition system can also be broken into two subsystems. First is the knowledge base, which contains all knowledge, including factual knowledge, procedural knowledge, and reasoning strategies for particular tasks. The second is the satisficing reasoning cycle, which itself is composed of three parts. The first is the agenda manager, responsible for identifying and prioritizing reasoning tasks. The second is the scheduler, which interrupts the agenda manager when it is 'ready' and schedules the next best operation on the third component, the executor. Operations can have multiple possible effects, including modifications to the perceptual filters, intended actions, new conclusions for ongoing reasoning, etc. This enables the agent to apply multiple reasoning methods to the same problem, work on multiple problems simultaneously, and trade off quality for a timely response. Within the knowledge base is a control plan, which is developed by the agent in accordance with its goals and is used to focus the reasoning cycle on completing the task at hand. This includes determining which actions have priority on the agenda, when the scheduler should interrupt, and what the perceptual filters should contain. The only changes to the control plan are those made by the agent, and hence were determined necessary by the control plan and environment at that time. This provides for global coherence of reasoning, perception, and action within the agent while still allowing a wide variety of strategies to be applied to a specific task, including opportunistic actions and reactions.
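A highly simplified sketch, based only on my reading of the description above (not Hayes-Roth's code): a bounded input buffer behind a perceptual filter, an agenda of reasoning tasks, and a scheduler that hands the best task to the executor. All the function arguments are hypothetical stand-ins:

```python
from collections import deque

BUFFER_SIZE = 4   # limited input buffer: unread events simply fall off

def reasoning_cycle(events, perceptual_filter, priority, execute):
    buffer = deque(maxlen=BUFFER_SIZE)
    agenda = []
    for event in events:
        if perceptual_filter(event):      # attention: filter the percepts
            buffer.append(event)
        agenda.extend(buffer)             # agenda manager: queue new tasks
        buffer.clear()
        if agenda:
            agenda.sort(key=priority)     # prioritize reasoning tasks
            execute(agenda.pop(0))        # scheduler hands the best task
                                          #   to the executor
```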
Russell and Norvig's Artificial Intelligence: A Modern Approach clearly lays out the structure of agents. They outline four basic kinds of agent programs that embody the principles underlying almost all intelligent systems: simple reflex agents, model-based reflex agents, goal-based agents, and utility-based agents. If we classify each agent mentioned in the article according to these four structures, we would have the following.

1. Simple reflex agents. These are the simplest kind of agent. They select actions based on the current percept alone, ignoring the rest of the percept history. An example is the MuBot agent.
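A minimal sketch of a simple reflex agent, using the two-square vacuum world from Russell and Norvig as the toy setting:

```python
RULES = {
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Clean"): "Left",
}

def simple_reflex_agent(percept):
    # Only the current percept matters; no history is consulted.
    return RULES[percept]

print(simple_reflex_agent(("A", "Dirty")))   # -> Suck
```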

2. Model-based reflex agents. These agents maintain internal state to track aspects of the world that are not evident in the current percept. Hayes-Roth and Wooldridge & Jennings agents are (I believe) model-based reflex ones.
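Here is the same vacuum world with internal state (my own toy version): the agent remembers what it has seen, so it can notice something the current percept alone does not reveal - that every square is believed clean:

```python
def make_model_based_agent():
    model = {"A": None, "B": None}   # believed status of each square

    def agent(percept):
        location, status = percept
        model[location] = status     # update internal state from the percept
        if status == "Dirty":
            return "Suck"
        if all(s == "Clean" for s in model.values()):
            return "NoOp"            # everything believed clean: stop
        return "Right" if location == "A" else "Left"

    return agent

agent = make_model_based_agent()
print(agent(("A", "Clean")))   # -> Right (square B is still unknown)
```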

3. Goal-based agents. A goal-based agent keeps track of the world state as well as a set of goals it is trying to achieve, and chooses an action that will (eventually) lead to the achievement of those goals. Almost every agent is goal-based. Examples include the Maes, KidSim, IBM, and Brustoloni agents.
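A sketch of the goal-directed part (my own toy example, a one-dimensional corridor): instead of reacting to the percept alone, the agent searches for an action sequence that reaches its goal:

```python
from collections import deque

def plan(start, goal, successors):
    """Breadth-first search for an action sequence from start to goal."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None

def successors(pos):   # corridor positions 0..4; step left or right
    moves = []
    if pos > 0:
        moves.append(("Left", pos - 1))
    if pos < 4:
        moves.append(("Right", pos + 1))
    return moves

print(plan(0, 3, successors))   # -> ['Right', 'Right', 'Right']
```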

4. Utility-based agents. An agent of this kind uses a model of the world, along with a utility function that measures its preferences among states of the world. It then chooses the action that leads to the best expected utility, computed by averaging over all possible outcome states, weighted by the probability of each outcome. The AIMA agent, with its performance measure, is an example of a utility-based agent.
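The expected-utility choice described above can be written in a few lines (the actions, probabilities, and utilities here are made-up numbers for illustration):

```python
def expected_utility(outcomes, utility):
    """outcomes: list of (probability, resulting_state) pairs."""
    return sum(p * utility(state) for p, state in outcomes)

def choose(actions, utility):
    # Pick the action with the highest probability-weighted average utility.
    return max(actions, key=lambda a: expected_utility(actions[a], utility))

utility = {"win": 10.0, "draw": 0.0, "loss": -5.0}.get
actions = {
    "safe":  [(0.9, "draw"), (0.1, "win")],   # expected utility = 1.0
    "risky": [(0.5, "win"),  (0.5, "loss")],  # expected utility = 2.5
}
print(choose(actions, utility))   # -> risky
```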
And of course, there is the process of learning, through which agents achieve better performance. Ideal agents are learning agents.

ON THE ESSENCE OF AGENCY
Based on the information they gathered, the authors gave a formal definition of an autonomous agent. They stated: "An autonomous agent is a system situated within and a part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future."
If I were to analyze this definition, I would say that it is more than enough. It organizes the key ideas presented earlier into one thought. It's a sort of summary, an intersection of all the definitions provided by the various sources. It mentions the environment, the goals (agenda), the sensors, and the actuators. It is the closest, most definite, most concise definition that we could get. In terms of being realistic, my view is that it is somewhat ambitious. But that's science. I mean, in the field of technology and computer science, that is all that matters - ambitious goals, which workers aim to achieve. Without ambition, science wouldn't exist. Regarding standards, it's natural to have high ones. It's not proper to settle for the mediocre; one ought to be the best and the most outstanding. If we are to make an agent, we must make it better than what we already have, or at least on the same level of intelligence as humans. Performance measures are readily available to check these qualifications. But as I have said, they may be too ambitious. Nonetheless, science defies limitations and impossibilities. It can go beyond what we think is possible.

CONCLUSION
Having stated everything that I have understood from the source article, the difference between a program and an agent has become clearer to me. Franklin and Graesser stressed the idea that an agent need not be a program at all; it could be a robot or a school teacher. Software agents are, however, by definition programs - but a program must measure up to several marks to be an agent. Simply put, every software agent is a program, but the reverse is not always true.
Agents adapt to their environment; they are flexible and autonomous - they have a sense of independence, somehow they have a 'life,' and they are capable of thinking. Programs, on the other hand, are static. They can vary, yes, but that depends on the programmer: unless the programmer changes or alters the program, it stays in its original state.
I would like to add that while doing this paper, it occurred to me that humans are just like agents. We learn from the experiences we've had, from socializing with our environment, the community. And we adapt to changes; we are flexible. And as long as we live, we will never stop learning, in the same way that agents do.
