Sunday, 29 April 2018

Multiagent Soft Q-Learning. (arXiv:1804.09817v1 [cs.AI])

Policy gradient methods are often applied to reinforcement learning in continuous multiagent games. These methods perform local search in the joint-action space, and as we show, they are susceptible to a game-theoretic pathology known as relative overgeneralization. To resolve this issue, we propose Multiagent Soft Q-learning, which can be seen as the analogue of applying Q-learning to continuous controls. We compare our method to MADDPG, a state-of-the-art approach, and show that our method achieves better coordination in multiagent cooperative tasks, converging to better local optima in the joint action space.
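For context, the single-agent soft Q-learning that the method generalizes replaces the hard max in the Bellman backup with a log-sum-exp "soft" value, weighted by a temperature. Below is a minimal sketch of that soft backup for a discrete action set; the function names and parameters are illustrative assumptions, not the paper's multiagent algorithm:

```python
import math

def soft_value(q_values, alpha=1.0):
    # Soft state value: V(s) = alpha * log sum_a exp(Q(s, a) / alpha).
    # As alpha -> 0 this approaches max_a Q(s, a), recovering hard Q-learning.
    m = max(q_values)  # subtract the max to stabilize the exponentials
    return m + alpha * math.log(sum(math.exp((q - m) / alpha) for q in q_values))

def soft_bellman_target(reward, next_q_values, gamma=0.99, alpha=1.0):
    # One-step soft Bellman target: r + gamma * V_soft(s').
    return reward + gamma * soft_value(next_q_values, alpha)
```

Because the soft value smooths over all actions rather than committing to the current greedy one, it keeps probability mass on jointly optimal actions, which is the intuition behind using it to combat relative overgeneralization in the multiagent setting.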



from cs updates on arXiv.org https://ift.tt/2JtivYS
