
Released

Conference Paper

Predictive Representations for Policy Gradient in POMDPs

MPS-Authors
There are no MPG-Authors available for this publication
External Resource
Fulltext (restricted access)
There are currently no full texts shared for your IP range.
Fulltext (public)
There are no public full texts stored in PuRe
Supplementary Material (public)
There is no public supplementary material available
Citation

Boularias, A., & Chaib-draa, B. (2009). Predictive Representations for Policy Gradient in POMDPs. In A. Danyluk, L. Bottou, & M. Littman (Eds.), ICML '09: Proceedings of the 26th Annual International Conference on Machine Learning (pp. 65-72). New York, NY, USA: ACM Press.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-C4A1-C
Abstract
We consider the problem of estimating the policy gradient in Partially Observable Markov Decision Processes (POMDPs) with a special class of policies that are based on Predictive State Representations (PSRs). We compare PSR policies to Finite-State Controllers (FSCs), which are considered a standard model for policy gradient methods in POMDPs. We present a general Actor-Critic algorithm for learning both FSCs and PSR policies. The critic computes a value function whose variables are the parameters of the policy; these parameters are gradually updated to maximize the value function. We show that the value function is polynomial for both FSCs and PSR policies, with a potentially smaller degree in the case of PSR policies. The value function of a PSR policy can therefore have fewer local optima than that of the equivalent FSC, and consequently the gradient algorithm is more likely to converge to a globally optimal solution.
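
The update loop the abstract describes (a critic that estimates the value of the current policy parameters, and parameters nudged along the gradient of that value) can be sketched in Python as below. This is a minimal illustrative sketch, not the authors' algorithm: the quadratic stand-in value function, the finite-difference gradient, the step size, and the iteration count are all assumptions made purely for the example.

import numpy as np

def value_estimate(theta):
    # Stand-in critic: a toy polynomial value of the policy parameters.
    # (The paper shows the true value function is polynomial in the
    # parameters for both FSC and PSR policies; this quadratic is only
    # an illustrative assumption.)
    return -np.sum((theta - 1.0) ** 2)

def grad_estimate(theta, eps=1e-5):
    # Central finite-difference gradient of the critic's value estimate
    # (an assumed stand-in for whatever gradient estimator is used).
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = eps
        grad[i] = (value_estimate(theta + e) - value_estimate(theta - e)) / (2 * eps)
    return grad

theta = np.zeros(3)  # policy parameters (e.g., FSC or PSR policy weights)
for step in range(200):
    theta += 0.1 * grad_estimate(theta)  # gradient ascent on the value

print(theta)  # converges near the maximizer of the toy value function

Because the toy value function is concave with a single maximizer, gradient ascent converges globally here; the abstract's point is that PSR policies make this favorable situation more likely by lowering the polynomial degree of the value function, and hence the number of local optima.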