Abstract:
Finding an optimal policy in a reinforcement learning (RL) framework with
continuous state and action spaces is challenging; approximate solutions
are often inevitable. GPDP (Gaussian process dynamic programming) is an
approximate dynamic programming algorithm that uses Gaussian process (GP)
models for the value functions. In this paper, we extend GPDP to the case
of unknown transition dynamics. After building a GP model of the transition
dynamics, we apply GPDP to this model and determine a continuous-valued
policy over the entire state space. We apply the resulting controller to the
underpowered pendulum swing-up and compare our results on this RL task to a
nearly optimal discrete DP solution computed in a fully known environment.