University of Massachusetts

MLFL Wiki

Learning State-Action Basis Functions for Hierarchical MDPs

This work introduces a new approach to action-value function approximation based on learning basis functions from a spectral decomposition of the state-action manifold. It extends previous work on Laplacian bases for value function approximation by incorporating the agent's actions into the representation from which the basis functions are built. The result is a learned nonlinear representation that is particularly well suited to approximating action-value functions. We discuss two techniques for constructing these state-action graphs and show examples of the resulting basis functions. We show that these graphs have greater expressive power than state-based Laplacian basis functions and yield better performance in domains modeled as Semi-Markov Decision Processes (SMDPs).
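As a rough sketch of the general idea only (not the specific graph constructions or algorithms from the talk), the Python fragment below builds Laplacian basis functions over (state, action) nodes of a toy chain domain and fits a linear action-value approximation on top of them. The function name laplacian_basis, the adjacency structure, the number of basis functions, and the least-squares fit are all illustrative assumptions.

import numpy as np

def laplacian_basis(adjacency, num_bases):
    # Eigenvectors of the normalized graph Laplacian with the smallest
    # eigenvalues; each eigenvector is one basis function over the set
    # of (state, action) nodes.
    degree = adjacency.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(degree, 1e-12)))
    laplacian = np.eye(len(adjacency)) - d_inv_sqrt @ adjacency @ d_inv_sqrt
    eigvals, eigvecs = np.linalg.eigh(laplacian)   # ascending eigenvalues
    return eigvecs[:, :num_bases]                  # smoothest functions first

# Toy example: 4 states x 2 actions = 8 (state, action) nodes on a chain.
# The adjacency structure here is purely illustrative.
n_states, n_actions = 4, 2
n_nodes = n_states * n_actions
A = np.zeros((n_nodes, n_nodes))
for s in range(n_states - 1):
    for a in range(n_actions):
        for a_next in range(n_actions):
            i, j = s * n_actions + a, (s + 1) * n_actions + a_next
            A[i, j] = A[j, i] = 1.0                # link successive state-action pairs

Phi = laplacian_basis(A, num_bases=4)              # 8 x 4 basis matrix

# An action-value function is then approximated as Q(s, a) ~= Phi[(s, a)] @ w,
# with w fit by any linear method; here, least squares against placeholder targets.
q_samples = np.random.rand(n_nodes)
w, *_ = np.linalg.lstsq(Phi, q_samples, rcond=None)
q_approx = Phi @ w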
