Intelligent Agents on the Internet
Intelligent agents have become one of the "hottest" topics in artificial intelligence research during the last few decades. One type of these agents, those that operate on the Internet, seems especially promising for implementation purposes. The reason is the environment in which such an agent operates: it is readily accessible to the agent, since most of the information on the Internet is available to the general public. This paper introduces the concept of intelligent agents on the Internet and analyzes some currently available agents. General characteristics of Internet agents are discussed, and initial steps toward a general Internet agent framework are identified. The paper concludes with a discussion of natural language processing (NLP) techniques in the domain of Internet agents.
Internet intelligent agents have enormous potential to impact almost every part of our "digital" society. Instead of conducting time-consuming searches to find articles needed for their research, imagine if a professor or a student could send out "a research agent" to search through the vast domain of the Internet, collect all the relevant articles, analyze them, and prepare a report which the agent would then present to its sender. Internet agents could substantially reduce the need for secretaries in the workplace. For instance, people could give an agent the task to "send the annual report to John Smith". The agent would then disambiguate Smith's identity and the nature of the report through interaction with the user, prepare the report, dynamically decide how to send it (email delivery could be sluggish that particular day, so perhaps fax should be used instead), and fulfill its task. Potential uses of Internet intelligent agents are limited only by one's imagination.
The task of creating such an agent is not trivial. First, the information on the Internet is highly irregular in form. Although Etzioni [6] argues for the structured web hypothesis, which states that "The information on the Web is sufficiently structured to facilitate effective Web mining", I don't believe that this is the case. There is no single format for displaying a given kind of information that follows from the nature of that information. Search-form query results may be displayed in a table, on a single line, or within an application; this decision is completely up to the creators of the web sites that produce the results. Applets, scripts, and server-side programs further complicate the location and extraction of information. Second, the Internet doesn't contain any semantic information, because HTML specifies how to display information without specifying its meaning. As mentioned in [6], there has been some talk about ways to semantically mark up HTML pages, but given the diversity of information resources on the Internet, this certainly seems to be an infeasible task. So, to reason about its domain, the agent must do so through its own cognitive processes. These processes must match or approximate a human user's ability to reason about the information found, and this fact alone makes Internet agents nontrivial. Third, the Internet is a dynamic structure: pages are constantly added and removed, and the contents of existing pages change over time.
There is a lot of confusion about the term "softbot". Some scientists, like the people at the University of Washington, treat softbot as a synonym for an intelligent agent. Others, like Kurki [9], claim that the term softbot implies less intelligence and a lack of high-level knowledge about the problem domain. The classification of agents is a problem as well, as we can see from [7] and [9]. Scientists cannot agree on a single definition of an agent. IBM researchers state that "Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy, and in so doing, employ some knowledge or representation of the user's goals or desires." Others, such as Coen [3], say that "Software agents are programs that engage in dialogs [and] negotiate and coordinate transfer of information." In essence, these definitions seem to coincide with the actual agents their authors have in mind and the prototypes they are working on. In my opinion, these different points of view should be unified, and a universal agent framework must be defined before a serious research effort in this field can start; the current inconsistencies tend to confuse many people, including myself. A general theme recurs, however, throughout all the definitions: an agent perceives and acts upon its environment, which in turn affects its cognitive state and future decisions. Because of its accessibility, the Internet is an environment that is comparatively easy for an agent to perceive and act upon.
In this paper, I analyze some of the currently deployed intelligent agents that operate on the Internet and evaluate them using my own test criteria. I concentrate on the internal structure of these agents (design, learning, information gathering, and inference techniques), as well as on their performance, in order to point out strengths and weaknesses that should inform the design of an Internet agent.
I will analyze the following Internet agents: ShopBot, the Internet Learning Agent, and FAQFinder.
ShopBot
ShopBot [5, 11] is one of the Internet agents developed by researchers at the University of Washington. It is a goal-oriented agent whose purpose is to assist a human user in shopping: it knows about different shopping sites and presents information extracted from those sites to the user, freeing the user from the information overload that such comparison shopping would otherwise entail. Although ShopBot is still only a prototype, it incorporates some very useful learning features (learning by example, off-line learning) that could be used in other types of Internet agents.
ShopBot operates in two phases. The first is the learning phase, during which ShopBot learns how to shop at certain web sites. These sites need to be specified by ShopBot's creators, which certainly limits ShopBot's flexibility. Another major drawback is that these sites must support search forms for ShopBot to be able to learn. Learning is done off-line, and it utilizes learning-by-example techniques. For example, when learning how to shop for CDs, ShopBot is given a set of examples that will be used to query the web sites from which it learns to shop. The example set contains some valid queries (e.g., "Pet Shop Boys") and some invalid ones containing gibberish. ShopBot then queries shopping sites with these examples and learns how to interpret search-form results. ShopBot's learning mechanism relies on the structure of search results, because most search forms return these results in some formatted way, rather than on a single line. Once ShopBot learns how to interpret search-form results from all relevant web sites, it is ready for the next phase.
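To make this concrete, the following is a minimal sketch of the learning-by-example idea under my own assumptions; the function name, the query_site callable, and the example sets are illustrative and do not reflect ShopBot's actual implementation:

```python
# Hypothetical sketch of ShopBot-style learning by example; query_site()
# is an assumed callable that submits a query and returns the result HTML.

def learn_result_format(query_site, valid_examples, gibberish_examples):
    """Learn which lines of a result page signal a product hit by
    contrasting pages for real queries against pages for gibberish."""
    # Lines that also appear for gibberish queries belong to the page's
    # fixed furniture (headers, footers, "no match" banners).
    furniture = set()
    for example in gibberish_examples:
        furniture.update(query_site(example).splitlines())

    # Lines unique to successful queries that mention the query term are
    # candidate result lines; a result format can be induced from them.
    candidate_lines = []
    for example in valid_examples:
        for line in query_site(example).splitlines():
            if line not in furniture and example.lower() in line.lower():
                candidate_lines.append(line)
    return candidate_lines
```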
The second phase is the real-time, online comparison-shopping phase, in which ShopBot extracts information from shopping sites given a query by a human user. It simply uses the learned extraction techniques to query different vendor sites, compares the prices, and presents its user with a summary.
In essence, ShopBot's intelligence is the ability to learn to shop at different sites. As the authors point out, ShopBot relies heavily on environmental regularities that occur in this particular task of shopping. These regularities are provided by the creators of shopping sites, who strive to give their visitors the ability to move around different HTML pages quickly and to comprehend information quickly. The navigation regularity means that products can be located quickly, and the fastest way to locate them is through searchable product indexes at shopping sites. Query results usually fit a particular format that is uniform across different products; this is the uniformity regularity. Finally, query results usually contain whitespace and newline characters to help users comprehend information quickly; this is referred to as the vertical separation regularity, which the sketch below exploits.
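As an illustration, a record splitter that exploits the vertical separation regularity might look like the following; this is my own simplification, not ShopBot's code:

```python
import re

def split_into_records(result_page: str) -> list[str]:
    """Split a query-result page into candidate product records,
    exploiting the vertical separation regularity: records tend to be
    separated by blank lines or explicit HTML breaks."""
    # Treat <br> as a line break and <p>/<hr> as record boundaries.
    text = re.sub(r"<br\s*/?>", "\n", result_page, flags=re.I)
    text = re.sub(r"</?(?:p|hr)\s*/?>", "\n\n", text, flags=re.I)
    # Split on blank lines and discard empty chunks.
    records = (chunk.strip() for chunk in re.split(r"\n\s*\n", text))
    return [record for record in records if record]
```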
ShopBot's ability to shop is product-independent, because all it needs in order to learn to shop for a new kind of product is a description of the particular domain. This description includes the example set, the locations of shopping sites, and specific heuristics for filling out search forms.
The natural language processing ability of ShopBot is very poor. It turns out that for ShopBot to function properly, no sophisticated NLP is required. ShopBot doesn't have the ability to understand language, nor does it attempt to use any kind of knowledge representation for language. Instead, it uses the structure of returned query results and some pattern matching to disambiguate queries while learning. Interaction with a human user is facilitated through the use of form fields, which are matched directly against vendors' databases, thus eliminating the need for a natural language interface.
There are some drawbacks to the overall structure of ShopBot. In a sense, this agent is domain-independent within a single domain: although it can learn to shop at many sites, it still only learns to shop and nothing else. The ultimate goal would be to have an agent with a shopping ability, not a shopping agent. The ability to shop only at sites that have search forms, and the inability of the end user to specify example sets, are major drawbacks to ShopBot as well.
ShopBot's technology is now available to the public at www.jango.excite.com as an Excite shopping agent called Jango. I searched for prices on CDs and specific laptops; in each case the agent returned quite a few matches from the shopping sites. However, when I tried to search for sound cards by choosing the category of computer peripherals, it found absolutely no matches, even though vendors' sites actually offered plenty of sound cards. ShopBot is very limited in its ability to evolve without being altered by its creators to provide new domain descriptions.
The Internet Learning Agent
The Internet Learning Agent (ILA) [11, 12] is another Internet agent with great potential developed at the University of Washington. It solves the category translation problem: understanding the output of an information source. Although it is very similar to ShopBot, it takes its learning capability one step further. ILA's task is not only to know how to access an information source and how to parse the results, but also to translate the results into its internal representation so it can integrate new information into its internal concepts. As in the case of ShopBot, ILA's intelligence is the ability to learn, given the example set.
Although ILA's prototype is not publicly available, the authors provide us with some examples of how ILA may be used in practice. For instance, when ILA is given knowledge about how to query the staff directory at the University of Washington, it is able to interpret all the relevant characteristics of a queried individual, such as first and last name, phone number, and other attributes. ILA was then able to query other universities' staff directories and learn with remarkable accuracy how to interpret the returned results. It does rely on the uniformity of returned query results: if it determines that the third field of a query result is a phone number, it assumes that all queries about other individuals from that information source will return their phone numbers in the third field.
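The following toy example conveys the flavor of category translation under my own assumptions; the attribute names and directory fields are invented, and this is not ILA's algorithm:

```python
# Hypothetical illustration of ILA-style category translation: match
# fields of a query result against attributes of a person the agent
# already knows, then generalize the mapping to whole columns.

def infer_field_mapping(known_person: dict, result_fields: list[str]) -> dict:
    """Map result-field positions to internal attribute names by
    comparing field values with a known individual."""
    mapping = {}
    for position, value in enumerate(result_fields):
        for attribute, known_value in known_person.items():
            if value.strip().lower() == str(known_value).strip().lower():
                mapping[position] = attribute  # e.g., {2: "phone"}
    return mapping

# Querying a directory for someone already in the agent's model:
known = {"last_name": "Smith", "first_name": "John", "phone": "543-1234"}
fields = ["Smith", "John", "543-1234"]
assert infer_field_mapping(known, fields) == {0: "last_name",
                                              1: "first_name",
                                              2: "phone"}
```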
Unfortunately, ILA is not very flexible. There must be an overlap between the information source and ILA's model of the information source. If ILA knows about the staff at a certain university, and the information source contains only student data, ILA will not be able to learn how to deal with that particular information source. Furthermore, unless ILA is given knowledge about how to query an information source, it will not be able to extract and learn anything from it. ILA can't operate on information sources that it doesn't know how to query, but on those it does know how to query, ILA performs remarkably well.
ILA's natural language processing is very primitive, since ILA uses pattern matching of returned results against its example set. Therefore, although ILA is able to translate information into its internal concepts, it doesn't fully comprehend this information, and it can perform the translation only if it is provided with an internal template of categories into which to translate. If ILA doesn't know about phone numbers, it can't extract them from an information source.
As I already mentioned, there is no public prototype available at this time, so we can only rely on the empirical results provided by the authors, according to which ILA was able to learn to extract and translate information from staff directories at several universities, after learning this process on the University of Washington's staff directory.
FAQFinder
FAQFinder [1, 2, 8] is the most complex and most sophisticated Internet agent I came across during my research. In essence, "... it is a natural language question answering system that uses files of frequently-asked questions as its knowledge base". FAQFinder uses the semantic knowledge base WordNet in order to understand the meaning of questions and answers. It is a complex system in that it incorporates four central methods in its architecture: 1) statistical information retrieval using the SMART statistical package, 2) syntactic bottom-up chart parsing to construct a parse tree and identify constituents, 3) categorization of the question types that an end user might pose to the system, and 4) semantic concept matching using WordNet lexical semantics and the MOBY thesaurus.
Interaction with an end user of FAQFinder is through a question posed in natural language. FAQFinder goes through its database of frequently-asked-questions (FAQ) files and uses the SMART package to identify the most likely matches, which are then presented to the human user, who selects the appropriate FAQ file. After that, the user's query is parsed in order to recognize its approximate constituent structure. The purpose of parsing is not to disambiguate the query, but to extract structural information from it; so although a natural language processing ability is present, it is a shallow one. After parsing, FAQFinder identifies the type of question based on the question categories it knows about. For example, a question such as "What is the difference between..." will be recognized as a comparison question, while "How much do I ..." will be recognized as a question dealing with quantities.
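A toy question-type classifier in the spirit of this taxonomy might look as follows; the categories and patterns are my own illustrative assumptions, not FAQFinder's actual rules:

```python
import re

# Illustrative question-type taxonomy, checked in order.
QUESTION_TYPES = [
    (re.compile(r"^what is the difference between", re.I), "comparison"),
    (re.compile(r"^how much|^how many", re.I), "quantity"),
    (re.compile(r"^where can i", re.I), "location"),
    (re.compile(r"^how (?:do|can) i", re.I), "procedure"),
]

def classify_question(question: str) -> str:
    """Return the first question type whose pattern matches."""
    for pattern, question_type in QUESTION_TYPES:
        if pattern.search(question.strip()):
            return question_type
    return "unknown"

print(classify_question("How much do I owe in import duties?"))  # quantity
```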
The next phase matches the query representation against question-answer pairs in the selected FAQ file. Quillian's marker-passing algorithm is used to produce the matches, and since quite a few matches may be produced, they need to be filtered. The parse tree is used to compare the syntactic roles of the parsed keywords with those in the question-answer pairs. Thus questions such as "How much is a table" and "How do I make a table" can be disambiguated, because in the former case "table" is the subject of the sentence, while in the latter it is the object of the verb.
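To give a feel for the semantic-matching step, here is a rough sketch of WordNet-based concept scoring. I use NLTK's WordNet interface purely as a stand-in for FAQFinder's lexical-semantics component; this is not the system's actual matching code:

```python
# Requires: pip install nltk; then nltk.download("wordnet") once.
from nltk.corpus import wordnet as wn

def concept_similarity(word_a: str, word_b: str) -> float:
    """Best path similarity between any senses of the two words."""
    best = 0.0
    for syn_a in wn.synsets(word_a):
        for syn_b in wn.synsets(word_b):
            score = syn_a.path_similarity(syn_b)
            if score is not None and score > best:
                best = score
    return best

# "car" and "automobile" share a synset, so similarity is 1.0;
# semantically unrelated words score much lower.
print(concept_similarity("car", "automobile"))
```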
Finally, all matched question-answer pairs from the specified FAQ file are presented to the human user. If FAQFinder were able to follow the matched links and retrieve the question-answer pairs for the end user, it would be much more pleasant and easier to use. This feature, however, could be implemented very easily given the current architecture of the system.
Although very impressive, FAQFinder has its drawbacks. First of all, it too relies on structure: since FAQ files are already in a question-answer format, the extraction of information is simplified, and all information about the relevance of question-answer pairs is available locally. Secondly, FAQFinder is designed under the assumptions that the question part is of greatest importance when query matching is performed, and that shallow semantic knowledge of natural language is sufficient for its purpose. Also, the way in which queries are posed can influence its performance, since there are several ways (and thus several parse trees) in which a single question can be asked. Finally, FAQ file creators often provide their readers with definitions of standard concepts relevant to a FAQ file, as well as with specialized sections that cover the basics of the topics it addresses. This information usually doesn't appear in a question-answer format, which makes FAQFinder unable to reason about it.
Although the natural language processing capability of FAQFinder is shallow, it appears to be quite sufficient for this system to perform as expected. However, semantic knowledge of the language alone is not sophisticated enough, so the use of the SMART statistical package and the taxonomy of question types certainly helps FAQFinder complete its tasks. Without these techniques, the system as designed would not be able to function at all. Once (and if) natural language understanding is solved, these helper techniques can be abandoned; until then, they are a necessity for adequate natural language understanding.
FAQFinder is publicly available at http://faqfinder.ics.uci.edu:8001. When I asked "How can a foreigner open up a business?", a number of responses were returned by FAQFinder, among which I found an answer to my question. It had previously taken me about three days and a lot of web browsing to find an answer to this question on my own, so I was very impressed with FAQFinder's performance in this case. However, when I asked "Where can I find information about taxonomy of Internet agents?", FAQFinder failed even to identify the correct FAQ files. A possible cause is that FAQFinder doesn't know about any FAQ files that may contain this information.
The Internet Agents' Characteristics
According to Russell and Norvig, every agent can be described by its PAGE characteristics: percepts, actions, goals, and environment. In the case of an Internet agent, these seem to be relatively easy to identify.
An Internet agent's percepts, at a very low level, are streams of zeros and ones. This description is not very useful, however, so we move to a higher level, on which an agent's percepts are web pages written in HTML, the inputs and outputs of client- and server-side programs (applets, scripts, etc.), and perhaps a natural language processing unit needed to interact with a user (as in the FAQFinder system). Of course, we see from the ShopBot example that such an agent can exist without an NLP unit, since input from a user can be a form that can be translated directly into a logical representation of the user's desires. Needless to say, an NLP unit would make interaction with agents much more user-friendly.
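To make this level of perception concrete, here is a minimal sketch, using only standard-library tools, of how an agent might reduce an HTML page to text and links; it is an illustration of the idea, not a component of any of the systems discussed:

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class PagePercept(HTMLParser):
    """Reduce an HTML page to the text fragments and links it contains."""
    def __init__(self):
        super().__init__()
        self.links, self.text = [], []

    def handle_starttag(self, tag, attrs):
        if tag == "a":  # collect outgoing links the agent could follow
            self.links.extend(v for k, v in attrs if k == "href" and v)

    def handle_data(self, data):
        if data.strip():  # keep non-empty visible text
            self.text.append(data.strip())

def perceive(url: str) -> PagePercept:
    percept = PagePercept()
    with urlopen(url) as page:
        percept.feed(page.read().decode("utf-8", errors="replace"))
    return percept
```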
An agent's actions depend on the nature of the task that it needs to accomplish, as well as on the domain in which the agent operates. An agent that monitors a certain server for the appearance of some user simply locates the server and then waits until the desired person logs on. A research agent would "search" the Internet, following all the relevant links and collecting information along the way. One common feature of Internet agents is that they should all provide their sender with some sort of feedback once the task is finished, whether or not the task was accomplished. In the first example, when the desired user logs on, the agent should inform its sender. In the second example, once a report is finished, it should be sent to the person who requested it. Because Internet agents are domain- and task-dependent, the task of creating a general-purpose Internet agent is almost impossible. One solution would be to have different knowledge bases for different domains, and a uniform way of using this domain knowledge in the tasks the agent performs; separation of knowledge and reasoning is still the key point (see the sketch below). Another solution would be to have different specialized agents for each problem/domain pair and have these agents work together. However, this implies the need for a uniform framework for Internet agents, since without one, interaction between specialized agents would be very hard, if not impossible, to achieve.
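A minimal sketch of this separation, under my own invented interfaces, might look as follows: one generic task loop consults interchangeable domain knowledge bases.

```python
from abc import ABC, abstractmethod

class DomainKnowledge(ABC):
    """Pluggable domain knowledge: what to consult and how to extract."""
    @abstractmethod
    def relevant_sources(self, task: str) -> list[str]: ...
    @abstractmethod
    def extract(self, source: str, task: str) -> list[str]: ...

class InternetAgent:
    """Uniform reasoning loop; swapping the knowledge base changes domain."""
    def __init__(self, knowledge: DomainKnowledge):
        self.knowledge = knowledge

    def accomplish(self, task: str) -> list[str]:
        findings = []
        for source in self.knowledge.relevant_sources(task):
            findings.extend(self.knowledge.extract(source, task))
        return findings  # feedback to the sender, success or not
```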
An agent's goals vary and are specified by the agent's user; inherently, all Internet agents are goal-oriented. Before the agent is dispatched to complete its goal, it needs domain knowledge that can be used to accomplish it. This clearly implies that an Internet agent can't get by without the ability to learn, in order to accumulate the knowledge needed to accomplish its goal. The agent must employ autonomous learning, for without it, the agent's designer would have to hard-code the knowledge about each domain the agent is capable of operating in, which would make agents extremely inert and inflexible. An agent's ability to recognize intermediate (sub)goals and act to satisfy them would be a helpful feature as well.
From the systems examined above, we see that the ability to learn, although indispensable, is not sufficient for an Internet agent to be flexible. In each of the systems mentioned, a designer must provide the system with a way to learn about its domain. It would be helpful if the agent had some sort of meta-learning knowledge: the ability to learn how to perform its tasks without user interference. Accomplishing this is very hard. If it cannot be designed, then at least end users of these systems should be able to instruct the agent how to learn in a particular domain. The ability to specify meta-learning knowledge is useful only if users find specifying this knowledge easier than carrying out the agent's activities themselves.
An agent's environment is accessible but, unfortunately, nondeterministic, because of the size of the Internet. The agent must be able to reason about its environment in order to constrain its search, because not all the links the agent may follow are relevant to a specific task. The environment is also dynamic, in that it constantly changes over time, so an Internet agent must be able to reason about its domain and recover from failures (links in the agent's knowledge base may die out) in real time.
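For illustration, one simple form of such recovery is pruning dead links from the agent's knowledge base as they are discovered; the knowledge-base layout here is my own assumption:

```python
from urllib.error import URLError
from urllib.request import urlopen

def prune_dead_links(knowledge_base: dict) -> None:
    """Drop links that no longer respond, adapting the agent's
    knowledge base to its changing environment in place."""
    for topic, links in knowledge_base.items():
        alive = []
        for link in links:
            try:
                with urlopen(link, timeout=5):
                    alive.append(link)
            except (URLError, OSError):
                pass  # the link died out; forget it for this topic
        knowledge_base[topic] = alive
```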
As Franklin [7] points out, some agents are actually multi-agents, in that they are composed of subagents that are agents themselves. Internet agents could be thought of as multi-agents if they can collaborate. Why send out only one research agent to prepare a report, when one could instead send out a number of them to different places and have them exchange their knowledge when they meet? In this case, an Internet agent would be a collection of independent subagents, each with its own ability to exist, perceive, and change its environment. The introduction of this multi-agent structure would require proper means to coordinate the separate agents, for without coordination, subagents might never meet and learn from each other. Needless to say, this would be very difficult to achieve, as we can see from [10]: it turns out that collaborative agents can't rely on local control laws alone, so some form of global control laws must be investigated further.
One must be very careful when designing the processes of goal identification and planning for Internet agents. It must be assured that the goals, as specified, can be satisfied to some degree. For example, a research agent could collect some information about a given topic without completely satisfying its goal. In this case, the agent could inform its sender that the goal was not satisfied completely, but still display the information it had gathered. An agent that follows a yes-or-no goal-satisfaction principle could accomplish most of the task, yet fail to report it because it didn't completely fulfill its goal. There are many techniques that could be used to assure that an agent's goals can be satisfied. One of them [4] uses the concept of goal transformation in continuous planning systems, and it seems suitable for the Internet agent domain. Since the Internet is constantly changing, the agent's goals may need to be changed in real time. Goal transformation techniques would let an agent map its goal to a more general one (when lacking specific knowledge about satisfying the goal) or a more specific one (when the agent possesses additional information that can help categorize the goal further). This would make Internet agents more flexible, since their goals could adapt to the dynamic environment and the knowledge they have available.
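A simple rendering of the generalization half of this idea, over an invented goal hierarchy, might look like this; it is only loosely inspired by the goal transformations of [4]:

```python
from typing import Callable, Optional

# Invented goal hierarchy: each goal maps to its more general parent.
GOAL_HIERARCHY = {
    "find_laptop_reviews": "find_product_reviews",
    "find_product_reviews": "find_product_info",
}

def generalize(goal: str) -> Optional[str]:
    """Map a goal to its more general parent, if one is known."""
    return GOAL_HIERARCHY.get(goal)

def transform_until_satisfiable(goal: Optional[str],
                                can_satisfy: Callable[[str], bool]) -> Optional[str]:
    """Climb the hierarchy until some goal can be (at least partially)
    satisfied; None means even the most general goal failed."""
    while goal is not None and not can_satisfy(goal):
        goal = generalize(goal)
    return goal
```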
I will now describe a potential high-level design of an Internet agent. Before design can even begin, there is a very important issue to consider: whether there is some advantage to having distributed Internet agents. I use the word "distributed" in the sense that these agents can physically transfer themselves from one Internet site to another. Although distributed Internet agents could certainly be implemented given current state-of-the-art technologies, I argue that this is neither necessary nor desirable. When a person "surfs the web", they are usually using a computer in a fixed location, retrieving information from the Internet rather than physically transferring their browser from one site to another. So the most natural way of implementing an Internet agent would be to design it so that it mimics the human ability to "surf the web" from one location. Furthermore, it is highly unlikely that distributed Internet agents would be able to transport themselves to every server they would need to visit, given current Internet security concerns. Most servers are set up behind a firewall that would prevent distributed agents from accessing them; on the other hand, such servers will usually allow a client to request and obtain information. Moreover, if multiple agents collaborate on a common task, it is much easier for them to exchange information if they are at the same physical location; introducing distributed agents would greatly increase the complexity of the collaboration techniques. Thus, having "surfer" agents rather than distributed ones certainly seems like a reasonable approach.
Given the goal-oriented nature of Internet agents, it is clear that the early phases in an agent's attempt to carry out a specified task are goal identification and planning. Goal identification is concerned with the transformation of an end user's request into some kind of internal representation of the agent's goals. Needless to say, this phase is perhaps the most crucial one, since if the goal is not properly identified, our agent could carry out meaningless tasks that do not comply with the end user's request. After the goal is identified, the agent must construct a plan which, when carried out, would ultimately lead to the satisfaction of the agent's goal. To make our agent more general, it should have the ability to acquire knowledge about satisfying a given goal when no plan exists that would instruct it how to do so.
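To make the two phases concrete, here is a hypothetical internal goal representation and a trivial planner; none of the names follow a real system, and a practical planner would of course be far richer:

```python
from dataclasses import dataclass

@dataclass
class Goal:
    """Invented internal representation of an identified goal."""
    action: str      # e.g., "collect_articles"
    topic: str       # e.g., "Internet agents"
    recipient: str   # who receives the feedback or report

def plan_for(goal: Goal) -> list[str]:
    """Produce an ordered list of steps intended to satisfy the goal."""
    return [
        f"identify sources relevant to '{goal.topic}'",
        f"{goal.action} from each source",
        "analyze and summarize the collected material",
        f"report results to {goal.recipient}",
    ]
```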
Since Internet agents "surf the web", a navigational unit is a necessity. This unit should be independent of the other parts of the agent and should serve the purpose of extracting data from the Internet. Of course, the decision about which Internet site to visit and what kind of information to extract still has to be made elsewhere. By keeping the navigational unit a separate component, however, the agent's navigational ability will be modular and will allow incremental changes to this module as new types of information become available on the Internet.
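The modularity argument can be sketched as follows, with navigation hidden behind a narrow, invented interface so that only this unit needs to change when new information types appear:

```python
import re
from urllib.request import urlopen

class NavigationUnit:
    """Knows how to fetch pages and extract data; decides nothing."""
    def fetch(self, url: str) -> str:
        with urlopen(url) as page:
            return page.read().decode("utf-8", errors="replace")

    def extract_links(self, html: str) -> list[str]:
        return re.findall(r'href="([^"]+)"', html)

class ResearchAgent:
    """Decides which sites to visit; delegates all I/O to the unit."""
    def __init__(self, navigation: NavigationUnit):
        self.navigation = navigation

    def relevant_links(self, url: str, is_relevant) -> list[str]:
        html = self.navigation.fetch(url)
        return [link for link in self.navigation.extract_links(html)
                if is_relevant(link)]
```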
Collaboration of multiple Internet agents is certainly worth exploring. It is very difficult to know how many or what kinds of agents will be needed to perform a specific task before an agent has completed the goal identification and planning phases; thus, resource allocation for Internet agents should follow from the plan. After the plan is constructed, an agent could dynamically assemble a team of existing Internet agents to collaborate on a given task. A more flexible option would be to have the agent create new agents to carry out the plan. This would require the agent to possess meta-agent knowledge, so as to be able to reason about the agents being created. Meta-agent knowledge would also be useful for an agent in analyzing and evaluating its own performance.
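A plan-driven team assembly could be sketched as follows; the keyword-matching scheme is purely illustrative:

```python
def assemble_team(plan: list[str], agent_factories: dict) -> list:
    """Pick one specialist per plan step: if a step mentions a factory's
    keyword (e.g., "search", "analyze", "report"), that factory is used.
    Resource allocation thus follows from the plan, not the other way around."""
    team = []
    for step in plan:
        factory = next((make for keyword, make in agent_factories.items()
                        if keyword in step), None)
        if factory is not None:
            team.append(factory())
    return team
```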
Internet Agents and NLP
NLP is an important issue for Internet agents. Not only would the ability to understand natural language make Internet agents more user-friendly when interacting with a human user, but it would also help them understand the information they acquire. Work in this area still needs to be done: of the implemented agents examined above, all of them, with the exception of FAQFinder, lack any kind of natural language understanding capability. One may argue that when ShopBot extracts information from vendor sites, it is performing some sort of NLP. However, the natural language text is simply pattern-matched against a prepared template, so there is no need for the agent to have internal concepts that capture the meaning of the extracted text. Until the NLP problem is solved, Internet agents will have to rely on other techniques for handling natural language. FAQFinder provides us with an interesting technique that uses statistical and categorization knowledge to support its shallow NLP capabilities. This form of NLP suffices for FAQFinder because of the structure of FAQ files, but it fails as a tool for processing textual information from files whose structure is different. Still, FAQFinder indicates that shallow NLP knowledge could be sufficient for many tasks, provided that appropriate helper techniques are used to process this knowledge further.
An Internet agent faces the same difficulties as any web data-mining system as far as textual information is concerned. As long as there is some structure to the information being extracted, the NLP task can be greatly simplified by using that structure as a template into which particular pieces of information are fitted. However, in situations where we cannot rely on the structured web hypothesis, we must use pure NLP techniques in order to understand the information. Because the state of the art in NLP is amazingly poor at the present time, intelligent Internet agents are, and will remain, not intelligent enough to understand the information they are presented with.
Conclusion
This paper examined currently available Internet agents and discussed some of their general characteristics, including the PAGE characteristics. The evaluation of existing agents showed that they are extremely domain-dependent and that their intelligence consists of the ability to learn alone. Goal identification and planning were identified as the initial phases in the design of any Internet agent. NLP techniques are not effectively used in the Internet agent framework, and even when they are used, they are not sufficient for effective NLP, because they lack sound theories to support them.
References
[1] Burke, R., Hammond, K. and Cooper, E. 1994. Knowledge-based information retrieval from semi-structured text. In AAAI Workshop on Internet-based Information Systems, pp. 9-15.
[2] Burke, R., Hammond, K. and Kulyukin, V. 1997. Question Answering from Frequently-Asked Question Files: Experiences with the FAQ Finder System. Technical Report TR-97-05, University of Chicago, Computer Science Department.
[3] Coen, M. 1994. SodaBot: A Software Agent Environment and Construction System. MIT AI Lab Technical Report 1493, June 1994.
[4] Cox, M. and Veloso, M. 1998. Goal Transformations in Continuous Planning. In M. desJardins (Ed.), Proceedings of the 1998 AAAI Fall Symposium on Distributed Continual Planning (pp. 23-30). Menlo Park, CA: AAAI Press / The MIT Press.
[5] Doorenbos, R., Etzioni, O. and Weld, D. 1996. A scalable comparison-shopping agent for the world-wide web. Technical Report 96-01-03, University of Washington, Department of Computer Science and Engineering.
[6] Etzioni, O. 1996. The World Wide Web: quagmire or gold mine? Communications of the ACM, November 1996.
[7] Franklin, S. and Graesser, A. 1996. Is it an Agent, or just a Program?: A Taxonomy for Autonomous Agents. In Proceedings of the Third International Workshop on Agent Theories, Architectures and Languages. Springer-Verlag.
[8] Hammond, K., Burke, R. and Schmitt, K. 1994. A Case-Based Approach to Knowledge Navigation. In AAAI Workshop on Knowledge Discovery in Databases.
[9] Kurki, T. Internet Agents. Tik-76.720 Agent seminar paper.
[10] Parker, L. 1992. Local versus Global Control Laws for Cooperative Agent Teams. MIT AI Lab Memo No. 1357, March 1992.
[11] Perkowitz, M. et al. Learning to understand information on the Internet: An example-based approach. Journal of Intelligent Information Systems, date unknown.
[12] Perkowitz, M. and Etzioni, O. 1995. Category Translation: Learning to understand information on the Internet. In Proc. 14th Int. Joint Conf. on AI, pp. 930-936.
Appendix: Related papers
1. Chess, D., Harrison, C. and Kershenbaum, A. 1995. Mobile Agents: Are They a Good Idea? IBM Research Report RC 19887, March 1995.
2. Etzioni, O. and Weld, D. 1994. A Softbot-Based Interface to the Internet. Communications of the ACM, 37(7): pp. 72-76, July 1994.
3. Etzioni, O. and Weld, D. 1995. Intelligent Agents on the Internet: Fact, Fiction, and Forecast. IEEE Expert, no. 4 (August), pp. 44-49. Also available via ftp from pub/ai/ at ftp.cs.washington.edu.
4. Grosof, B. 1997. Building Commercial Agents: An IBM Research Perspective. IBM Research Report RC 20835, May 1997.
5. Grosof, B. et al. 1995. Reusable Architecture for Embedding Rule-based Intelligence in Information Agents. IBM Research Report RC 20305, December 1995.
6. Knoblock, C. and Ambite, J. 1997. Agents for Information Gathering. In Software Agents, J. Bradshaw, ed., AAAI/MIT Press, Menlo Park, CA.
7. Mataric, M. 1991. A Comparative Analysis of Reinforcement Learning Methods. MIT AI Lab Memo No. 1322, October 1991.
8. Safra, S. and Tennenholtz, M. 1994. On Planning while Learning. Journal of Artificial Intelligence Research 2, pp. 111-129.