MLFL Wiki
Learning Multimap Robot Control Policies From Demonstration

Many robot control policies can be cast as finite state machines, in which a task divides into subtasks that are combined to achieve the overall goal. Learning such a task from demonstration involves learning the number of subtasks, their individual policies, how to switch between them, and, optionally, how to improve performance. I will talk about an approach to learning the subtasks and their policies. Demonstration data takes the form of a multimap, where a state may lead to multiple actions; that is, the distribution over correct actions, given a state estimate, is multimodal. Multimaps may occur due to hidden state or perceptual aliasing. Framing learning as a problem in multimap regression, I will present experiments with ROGER (Realtime Overlapping Gaussian Expert Regression), an incremental infinite mixture-of-experts approach. ROGER addresses model selection (choosing the number of subtasks), gating (assigning data to subtasks), and policy learning (for each subtask). Because it is formulated in a sparse, incremental fashion, ROGER is suitable for interactive, mixed-initiative learning and may be combined with other work to perform full FSM learning. The experiments presented are in the domain of robot soccer.
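The abstract does not give ROGER's internals, so the following is only a rough Python sketch of the multimap-regression setting it describes: each demonstrated (state, action) pair is gated to the Gaussian expert that explains it best, and a new expert is spawned when none does, so the number of subtasks grows with the data. The GaussianExpert class, the novelty threshold, and the bimodal toy data are illustrative assumptions, not the ROGER algorithm itself.

```python
# Sketch: incremental multimap regression with a growing set of Gaussian
# experts. Illustrative only -- the threshold rule and expert model are
# assumptions, not taken from ROGER.
import numpy as np


class GaussianExpert:
    """One candidate subtask: a running Gaussian over (state, action) pairs."""

    def __init__(self, x, y):
        self.n = 1
        self.mean = np.array([x, y], dtype=float)
        self.cov = np.eye(2) * 0.5  # broad initial covariance

    def log_likelihood(self, x, y):
        # Gaussian log-density of the pair under this expert.
        d = np.array([x, y]) - self.mean
        inv = np.linalg.inv(self.cov)
        _, logdet = np.linalg.slogdet(self.cov)
        return -0.5 * (d @ inv @ d + logdet + 2.0 * np.log(2.0 * np.pi))

    def update(self, x, y):
        # Incremental (Welford-style) mean and covariance update.
        self.n += 1
        z = np.array([x, y], dtype=float)
        delta = z - self.mean
        self.mean += delta / self.n
        self.cov += (np.outer(delta, z - self.mean) - self.cov) / self.n


def fit_incremental(stream, new_expert_log_threshold=-4.0):
    """Gate each sample to its best expert; spawn a new expert on novelty."""
    experts = []
    for x, y in stream:
        if experts:
            scores = [e.log_likelihood(x, y) for e in experts]
            best = int(np.argmax(scores))
            if scores[best] > new_expert_log_threshold:
                experts[best].update(x, y)  # gating: assign to existing subtask
                continue
        experts.append(GaussianExpert(x, y))  # model selection: grow the mixture
    return experts


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Multimap demonstration data: the same state region maps to two distinct
    # actions (e.g. "pass left" vs. "pass right"), so y given x is bimodal
    # and ordinary unimodal regression would average the two modes.
    xs = rng.uniform(0.0, 1.0, 400)
    ys = np.where(rng.random(400) < 0.5, 1.0, -1.0) + 0.05 * rng.standard_normal(400)
    experts = fit_incremental(zip(xs, ys))
    print(f"discovered {len(experts)} subtasks")
    for e in experts:
        print(f"  n={e.n:3d}  mean action={e.mean[1]:+.2f}")
```

Because each sample is processed once and only the experts' running statistics are kept, a scheme like this stays incremental, which is the property the abstract highlights as making ROGER suitable for interactive, mixed-initiative learning.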