

Workshop

Reincarnating Reinforcement Learning

Rishabh Agarwal · Ted Xiao · Yanchao Sun · Max Schwarzer · Susan Zhang


Learning “tabula rasa”, that is, from scratch without relying on much previously learned knowledge, is the dominant paradigm in reinforcement learning (RL) research. However, tabula rasa learning is the exception rather than the norm when solving larger-scale problems. Additionally, the inefficiency of tabula rasa RL typically excludes most researchers outside a few resource-rich labs from tackling computationally demanding problems. To address these inefficiencies and help unlock the full potential of deep RL, our workshop aims to bring further attention to the emerging paradigm of reusing prior computation in RL, highlight its potential benefits and real-world applications, examine its current limitations and challenges, and arrive at concrete problem statements and evaluation protocols for the research community to work on. Furthermore, we hope to foster discussion through panel sessions (with audience participation), several contributed talks, and short opinion papers welcomed in our call for papers.

Timezone: America/Los_Angeles

Schedule