Agents that Learn from Other Agents

These are the on-line proceedings of the workshop on Agents that Learn from Other Agents, held as part of the 1995 International Machine Learning Conference. Opening with an invited talk by Tom Mitchell of Carnegie Mellon University, the workshop featured eleven presentations of current research on this topic.

Introduction

There has been a growing trend in machine learning toward learning methods that involve interacting with other agents. One such interaction is via advice-taking (Suddarth & Holden 1991), which may well turn out to be the most efficient method for building software-agent architectures. Instruction is steadily increasing in popularity as a method for agent learning (e.g., Gordon & Subramanian 1993, Huffman & Laird 1993, Lin 1993, Maclin & Shavlik 1994, Noelle & Cottrell 1994, Tecuci, Hieb, Hille, & Pullen 1994). Meanwhile, there is also active interest in agents that learn from observation (e.g., Iba 1991, Lin 1993), as well as agents that communicate with and learn from each other (e.g., Etzioni & Weld 1994, Lashkari, Metral, & Maes 1994, Tan 1993). Furthermore, in the COLT community there have been a number of papers on team learning (e.g., Daley, Kalyanasundaram, & Velauthapillai 1993). For these reasons, the main focus of this workshop was on learning from instruction or observation, rather than from the environment. The instructors might be humans or other automated agents. (References to the above-mentioned articles appear below.)

Organizing Committee

Proceedings

(The schedule is currently on-line.)

Theory of Team Learning

Learning from Instruction

Learning from Observation

Knowledge Acquisition and Refinement


Bibliography for the Introduction


Last modified: Fri Jun 16 16:45:58 1995 by Jude Shavlik
shavlik@cs.wisc.edu