Reincarnating RL

This inaugural workshop at ICLR 2023 (in-person) aims to bring further attention to the emerging paradigm of reusing prior computation in RL, which we refer to as reincarnating RL. Specifically, we plan to discuss the potential benefits of reincarnating RL, its current limitations and associated challenges, and develop concrete problem statements and evaluation protocols for the research community to work on.

Tabula rasa RL vs. reincarnating RL. While tabula rasa RL focuses on learning from scratch, reincarnating RL (RRL) is based on the premise of reusing prior computational work (e.g., previously learned agents) when training new agents or improving existing ones. Source: Google AI Blog.

Why? Reusing prior computation can further democratize RL research by allowing the broader community to tackle complex RL problems without requiring excessive computational resources. Furthermore, real-world RL use cases are common in scenarios where prior computational work is available, making reincarnating RL important to study. Additionally, reincarnating RL can enable a benchmarking paradigm where researchers continually improve and update existing trained agents, especially on problems where improving performance has real-world impact. However, except for some large-scale RL efforts with ad hoc approaches, the RL community has only recently started focusing on reincarnating RL as a research problem in its own right.

Call for papers

Submission Site:


Avishkar Bhoopchand
Joseph Lim
Korea Advanced Institute of Science and Technology (KAIST)
Furong Huang
University of Maryland
Anna Goldie
Anthropic / Stanford University
Sergey Levine
UC Berkeley


Challenges & Open Problems in Reusing Prior Computation

Jeff Clune
Marc G. Bellemare
Google Research, Brain Team
Joseph Lim
Korea Advanced Institute of Science and Technology (KAIST)
Jim (Linxi) Fan
NVIDIA AI
Anna Goldie
Anthropic / Stanford University
Furong Huang
University of Maryland
Avishkar Bhoopchand

RRL Benchmarking Track Supporters

Please see the call for papers for more details about the special track on benchmarking reincarnating RL.

Nathan Lambert
Hugging Face
Costa Huang
CleanRL, Drexel University
Antonin Raffin
SB3, German Aerospace Center


Rishabh Agarwal
Google Brain
Ted Xiao
Google Robotics
Yanchao Sun
University of Maryland, College Park
Susan Zhang
Meta AI
Max Schwarzer
Mila, University of Montreal

For any queries, please reach out to the organizers at .