Learning from Demonstration, Human Feedback, and Environmental Rewards
Reinforcement learning algorithms can find near-optimal solutions, but they often suffer from poor initial performance and long learning times. Furthermore, an external environmental reward is not always available. This talk will discuss two lines of work on leveraging human knowledge to learn sequential decision tasks. First, we will discuss how human-demonstrated data can be leveraged to improve autonomous learning. Second, we will show how discrete human feedback can be used to learn complex tasks in the absence of an external reward signal. The talk will conclude with a selection of open problems and future research directions.
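To make the second idea concrete, here is a minimal sketch of learning purely from discrete human feedback, with no environmental reward. It is not from the talk itself: the toy corridor task, the simulated trainer, and all names (`trainer_feedback`, `train`, the table `H`) are illustrative assumptions. The agent regresses a tabular feedback model toward the trainer's +1/-1 signals and acts greedily with respect to it, in the spirit of TAMER-style approaches.

```python
import random

N = 5                # hypothetical toy corridor: states 0..4, goal at state 4
ACTIONS = [-1, +1]   # move left / move right

def trainer_feedback(state, action):
    # Simulated human trainer: +1 if the action moves toward the goal,
    # -1 otherwise. This discrete signal replaces any environmental reward.
    return 1.0 if action == +1 else -1.0

def train(episodes=50, alpha=0.3, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    # H(s, a) estimates the feedback the trainer would give for (s, a).
    H = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        for _ in range(20):
            # Epsilon-greedy with respect to the learned feedback model.
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: H[(s, act)])
            f = trainer_feedback(s, a)
            H[(s, a)] += alpha * (f - H[(s, a)])  # regress toward feedback
            s = max(0, min(N - 1, s + a))          # clamped corridor dynamics
            if s == N - 1:                         # reached the goal state
                break
    return H

H = train()
# Greedy policy induced by the learned feedback model.
policy = {s: max(ACTIONS, key=lambda act: H[(s, act)]) for s in range(N)}
```

After training, the greedy policy moves right (toward the goal) in every non-terminal state, even though the agent never observed an environmental reward; the human feedback alone shaped the behavior.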
Dr. Matthew E. Taylor is an assistant professor at Washington State University in the School of Electrical Engineering and Computer Science. He holds the Allred Distinguished Professorship in Artificial Intelligence, has won an NSF CAREER award, and is on the IFAAMAS board of directors. He currently supervises more than 10 graduate students in the Intelligent Robot Learning Laboratory. His current research interests include intelligent agents, multi-agent systems, reinforcement learning, transfer learning, and robotics.