Policy Augmentation: An Exploration Strategy For Faster Convergence Of Deep Reinforcement Learning Algorithms
Arash Mahyari
SPS
Length: 00:11:45
Despite advancements in deep reinforcement learning algorithms, developing an effective exploration strategy remains an open problem. Most existing exploration strategies are either based on simple heuristics, require a model of the environment, or train additional deep neural networks to generate imagination-augmented paths. In this paper, a new algorithm, called Policy Augmentation, is introduced. Policy Augmentation is based on a newly developed inductive matrix completion method. The proposed algorithm augments the values of unexplored state-action pairs, helping the agent take actions that yield high-value returns while it is still in the early episodes. Training deep reinforcement learning algorithms with such high-value rollouts leads to faster convergence. Our experiments show the superior performance of Policy Augmentation. The code can be found at: https://github.com/arashmahyari/PolicyAugmentation.
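To make the core idea concrete, here is a minimal sketch of inductive matrix completion applied to a value table: the values of observed (state, action) pairs are used to fit a low-rank link matrix between state and action features, which then fills in ("augments") the values of unexplored pairs. The feature matrices, dimensions, and least-squares fit below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, d_s, d_a = 20, 5, 4, 3

X = rng.normal(size=(n_states, d_s))   # assumed state features
Y = rng.normal(size=(n_actions, d_a))  # assumed action features
W_true = rng.normal(size=(d_s, d_a))   # unknown low-rank link matrix
Q_full = X @ W_true @ Y.T              # ground-truth value table (synthetic)

# Observe only a subset of (state, action) pairs, as in early episodes.
mask = rng.random((n_states, n_actions)) < 0.4
rows, cols = np.nonzero(mask)
q_obs = Q_full[rows, cols]

# Inductive matrix completion: q(s,a) = x_s^T W y_a.
# Vectorizing W turns each observation into one linear equation,
# since x_s^T W y_a = (x_s outer y_a) . vec(W); solve by least squares.
A = np.einsum('ij,ik->ijk', X[rows], Y[cols]).reshape(len(rows), -1)
w_hat = np.linalg.lstsq(A, q_obs, rcond=None)[0]

# Augmented value table: estimates for all pairs, including unexplored ones.
Q_hat = X @ w_hat.reshape(d_s, d_a) @ Y.T
print(np.max(np.abs(Q_hat - Q_full)))
```

In this noiseless synthetic setting the 40-odd observed entries over-determine the 12 unknowns of W, so the unexplored entries are recovered accurately; in practice the agent would use such completed values to bias action selection toward high-value returns.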
Chairs:
Seung-Jun Kim