Wednesday, 3 January 2018

f-Divergence constrained policy improvement. (arXiv:1801.00056v1 [cs.LG])

To ensure stability of learning, state-of-the-art generalized policy iteration algorithms augment the policy improvement step with a trust region constraint bounding the information loss. The size of the trust region is commonly determined by the Kullback-Leibler (KL) divergence, which not only captures the notion of distance well but also yields closed-form solutions. In this paper, we consider a more general class of f-divergences and derive the corresponding policy update rules. The generic solution is expressed through the derivative of the convex conjugate function to f and includes the KL solution as a special case. Within the class of f-divergences, we further focus on a one-parameter family of $\alpha$-divergences to study the effects of the choice of divergence on policy improvement. Both previously known and new policy updates emerge for different values of $\alpha$. We show that every type of policy update comes with a compatible policy evaluation resulting from the chosen f-divergence. Interestingly, mean-squared Bellman error minimization is closely related to policy evaluation with the Pearson $\chi^2$-divergence penalty, while the KL divergence results in the soft-max policy update and a log-sum-exp critic. We carry out asymptotic analysis of the solutions for different values of $\alpha$ and demonstrate the effects of using different divergence functions on a multi-armed bandit problem and on standard reinforcement learning problems.
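For readers who want the shape of the result, here is a brief sketch of the constrained update the abstract alludes to. It is not reproduced from the paper; the direction of the divergence, the use of the advantage $A(s,a)$, and the multipliers $\eta$ and $\lambda(s)$ are conventions assumed here for illustration. Per state $s$, with current policy $q = \pi_{\text{old}}$, the improvement step can be written as

$\max_{\pi} \ \mathbb{E}_{a \sim \pi(\cdot|s)}[A(s,a)] \quad \text{s.t.} \quad D_f(\pi \,\|\, q) = \mathbb{E}_{a \sim q(\cdot|s)}\!\left[ f\!\left( \tfrac{\pi(a|s)}{q(a|s)} \right) \right] \le \epsilon .$

Introducing multipliers $\eta \ge 0$ for the trust-region constraint and $\lambda(s)$ for normalization, stationarity in $\pi(a|s)$ gives $A(s,a) - \lambda(s) - \eta\, f'\!\big(\pi(a|s)/q(a|s)\big) = 0$, hence

$\pi(a|s) = q(a|s)\, (f^{*})'\!\left( \frac{A(s,a) - \lambda(s)}{\eta} \right),$

using $(f^{*})' = (f')^{-1}$ for the convex conjugate $f^{*}$, which is the "derivative of the convex conjugate" form the abstract describes. For the KL case $f(x) = x \log x$, one has $(f^{*})'(y) = e^{\,y-1}$, so $\pi(a|s) \propto q(a|s)\,\exp\!\big(A(s,a)/\eta\big)$: the soft-max policy update, with the normalizer contributing the log-sum-exp term mentioned above.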



from cs updates on arXiv.org http://ift.tt/2EHJ9vp