Agents that Learn from Other Agents
These are the on-line proceedings of the
workshop on Agents that Learn from Other Agents,
held as part of the
1995 International Machine Learning Conference.
The workshop opened with an invited talk by
Tom Mitchell of Carnegie Mellon University,
followed by eleven reports on current research in this area.
Introduction
There has been a growing trend in machine learning toward learning
methods that involve interacting with other agents.
One such interaction is via advice-taking
(Suddarth & Holden 1991),
which may well turn out to be the
most efficient method for building software-agent architectures. Instruction is
steadily increasing in popularity as a method for agent learning (e.g.,
Gordon & Subramanian 1993,
Huffman & Laird 1993,
Lin 1993,
Maclin & Shavlik 1994,
Noelle & Cottrell 1994,
Tecuci, Hieb, Hille, & Pullen 1994).
Meanwhile, there is also
an active interest in agents that learn from observation (e.g.,
Iba 1991,
Lin 1993),
as well as agents that communicate and learn from each other
(e.g.,
Etzioni & Weld 1994,
Lashkari, Metral, & Maes 1994, Tan 1993).
Furthermore, in the COLT community there have been a number of papers
on team learning (e.g.,
Daley, Kalyanasundaram, & Velauthapillai 1993).
For these reasons, the main focus of
this workshop was on learning from instruction or observation, rather than
solely from the environment. The instructors may be humans or other automated agents.
(References to the above-mentioned articles appear below.)
Organizing Committee
Proceedings
(The schedule is currently on-line.)
Theory of Team Learning
Learning from Instruction
- Combining learning from instruction with
recovery from incorrect knowledge, by
Douglas Pearson, University of Michigan, and
Scott Huffman, Price Waterhouse.
- Conflict resolution in advice taking and
instruction for learning agents, by
Benjamin Grosof, IBM Watson Research.
- Learning from an automated training agent, by
Jeffrey Clouse, University of Massachusetts.
- Learning from instruction and experience in competitive
situations, by
Jude Shavlik and
Richard Maclin, University of Wisconsin.
Learning from Observation
Knowledge Acquisition and Refinement
Bibliography for the Introduction
- S. Suddarth & A. Holden.
Symbolic-neural systems and the use of hints in developing complex systems.
International Journal of Man-Machine Studies, 35:291-311, 1991.
- D. Gordon & D. Subramanian.
A multistrategy learning scheme for agent knowledge acquisition.
Informatica, 17:331-346, 1993.
- S. Huffman & J. Laird.
Learning procedures from interactive natural language instructions.
Procs: 1993 Machine Learning Conf.
- L. Lin.
Scaling up reinforcement learning for robot control.
Procs: 1993 Machine Learning Conf.
- G. Tecuci, M. Hieb, D. Hille, & J. Pullen.
Building adaptive autonomous agents for adversarial domains.
Procs: AAAI-94 Fall Symposium.
- R. Maclin & J. Shavlik.
Incorporating advice into agents that learn from reinforcements.
Procs: AAAI-94.
- D. Noelle & G. Cottrell.
Integrating induction and instruction: Connectionist advice taking.
Procs: AAAI-94.
- W. Iba.
Learning to classify observed motor behavior.
Procs: IJCAI-91.
- O. Etzioni & D. Weld.
A softbot-based interface to the Internet.
Communications of the ACM, 37(7),
special issue on Intelligent Agents, 1994.
- Y. Lashkari, M. Metral, & P. Maes.
Collaborative interface agents.
Procs: AAAI-94.
- M. Tan. Multi-agent reinforcement learning: Independent vs.
cooperative agents. Procs: 1993 Machine Learning Conf.
- R. Daley, B. Kalyanasundaram, & M. Velauthapillai.
Capabilities of fallible finite learning.
Procs: COLT-93.
Last modified: Fri Jun 16 16:45:58 1995 by Jude Shavlik
shavlik@cs.wisc.edu