

Virtual presentation / poster accept

Solving Continuous Control via Q-learning

Tim Seyde · Peter Werner · Wilko Schwarting · Igor Gilitschenski · Martin Riedmiller · Daniela Rus · Markus Wulfmeier

Keywords: [ Reinforcement Learning ] [ continuous control ] [ learning efficiency ]


Abstract:

While there has been substantial success in solving continuous control with actor-critic methods, simpler critic-only methods such as Q-learning find limited application in the associated high-dimensional action spaces. However, most actor-critic methods come at the cost of added complexity: heuristics for stabilisation, compute requirements, and wider hyperparameter search spaces. We show that a simple modification of deep Q-learning largely alleviates these issues. By combining bang-bang action discretization with value decomposition, framing single-agent control as cooperative multi-agent reinforcement learning (MARL), this simple critic-only approach matches the performance of state-of-the-art continuous actor-critic methods when learning from features or pixels. We extend classical bandit examples from cooperative MARL to provide intuition for how decoupled critics leverage state information to coordinate joint optimization, and demonstrate surprisingly strong performance across a variety of continuous control tasks.
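To make the core idea concrete, here is a minimal sketch (Python/PyTorch) of a decoupled, bang-bang critic under the assumptions described in the abstract: each action dimension gets its own Q-values over the two extreme actions {-a_max, +a_max}, each dimension acts greedily on its own values, and the joint value used in the TD target is an additive combination (here the mean) of the per-dimension maxima. This is an illustrative reconstruction, not the authors' implementation; all names (DecoupledQNet, greedy_action, td_target, hidden_dim, a_max) are assumptions.

# Sketch of a decoupled bang-bang critic with value decomposition.
# Not the authors' code; shapes and names are illustrative only.
import torch
import torch.nn as nn


class DecoupledQNet(nn.Module):
    def __init__(self, obs_dim: int, action_dims: int, hidden_dim: int = 256):
        super().__init__()
        self.action_dims = action_dims
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            # One pair of Q-values (for -a_max and +a_max) per action dimension.
            nn.Linear(hidden_dim, action_dims * 2),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # Returns per-dimension Q-values with shape (batch, action_dims, 2).
        return self.net(obs).view(-1, self.action_dims, 2)


def greedy_action(q_net: DecoupledQNet, obs: torch.Tensor, a_max: float = 1.0) -> torch.Tensor:
    # Each per-dimension critic picks its own bang-bang action independently,
    # so the greedy step is linear (not exponential) in the number of dimensions.
    q = q_net(obs)                         # (batch, action_dims, 2)
    idx = q.argmax(dim=-1)                 # 0 -> -a_max, 1 -> +a_max
    return (2 * idx - 1).float() * a_max   # continuous bang-bang action


def td_target(q_target: DecoupledQNet, reward: torch.Tensor,
              next_obs: torch.Tensor, discount: float = 0.99) -> torch.Tensor:
    # Value decomposition: joint value is the mean of per-dimension maxima.
    with torch.no_grad():
        next_q = q_target(next_obs).max(dim=-1).values.mean(dim=-1)  # (batch,)
        return reward + discount * next_q

The per-dimension argmax is what keeps this tractable: a naive joint discretization of a d-dimensional action space into two bins per dimension would require 2^d Q-outputs, whereas the decomposed critic only needs 2d. Whether the per-dimension values are combined by mean or sum is a design choice of the decomposition and does not change the greedy policy.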
