State Representation Learning for Effective Deep Reinforcement Learning

Jian Zhao, Wengang Zhou, Tianyu Zhao, Yun Zhou, Houqiang Li

Length: 09:20
08 Jul 2020

Recent years have witnessed the great success of deep reinforcement learning (DRL) on a variety of vision-based games. Although deep neural networks have demonstrated strong power in representation learning, this capacity is under-explored in most DRL works, whose focus is usually on the optimization algorithm. In fact, we find that state feature learning is the main obstacle to further improvement of DRL algorithms. To address this issue, we propose a new state representation learning scheme built on our Adjacent State Consistency Loss (ASC Loss). The loss is defined on the hypothesis that adjacent states differ less than temporally distant ones, since scenes in videos generally evolve smoothly. In this paper, we use the ASC loss as an auxiliary to the RL loss during training to boost state feature learning. We evaluate our method on Atari games and MuJoCo continuous control tasks, and the results demonstrate that it outperforms OpenAI Baselines.
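
The abstract does not spell out the exact form of the ASC loss, so the following PyTorch sketch shows one plausible margin-based (triplet) reading of the adjacent-state hypothesis: an encoder is trained so that adjacent states sit closer in feature space than far-apart ones. The names encoder, margin, and lambda_asc are illustrative assumptions, not the authors' definitions.

    import torch
    import torch.nn.functional as F

    def asc_loss(encoder, s_t, s_adj, s_far, margin=1.0):
        # Encode three batches of states into feature space.
        z_t = encoder(s_t)      # current states
        z_adj = encoder(s_adj)  # temporally adjacent states
        z_far = encoder(s_far)  # far-apart states

        # Feature distances: adjacent pairs should be closer than distant pairs.
        d_adj = F.pairwise_distance(z_t, z_adj)
        d_far = F.pairwise_distance(z_t, z_far)

        # Hinge penalty whenever an adjacent pair is not closer
        # than the far-apart pair by at least `margin`.
        return torch.clamp(d_adj - d_far + margin, min=0.0).mean()

Consistent with the paper's framing of the ASC loss as an assistant to the RL loss, such a term would be weighted and added to the RL objective during training, e.g. total_loss = rl_loss + lambda_asc * asc_loss(encoder, s_t, s_adj, s_far), where lambda_asc is a hypothetical trade-off coefficient.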
