{"id":173,"date":"2023-01-25T17:03:03","date_gmt":"2023-01-25T17:03:03","guid":{"rendered":"https:\/\/todaysainews.com\/index.php\/2023\/01\/25\/reincarnating-reinforcement-learning-google-ai-blog\/"},"modified":"2025-04-27T07:35:55","modified_gmt":"2025-04-27T07:35:55","slug":"reincarnating-reinforcement-learning-google-ai-blog","status":"publish","type":"post","link":"https:\/\/todaysainews.com\/index.php\/2023\/01\/25\/reincarnating-reinforcement-learning-google-ai-blog\/","title":{"rendered":"Reincarnating Reinforcement Learning \u2013 Google AI Blog"},"content":{"rendered":"<p> [ad_1]<br \/>\n<\/p>\n<div id=\"post-body-7557864009939969171\">\n<span class=\"byline-author\">Posted by Rishabh Agarwal, Senior Research Scientist, and Max Schwarzer, Student Researcher, Google Research, Brain Team<\/span><\/p>\n<p><img decoding=\"async\" src=\"https:\/\/blogger.googleusercontent.com\/img\/b\/R29vZ2xl\/AVvXsEj0b-yGyOyTPS2exZHwK8RnAbJH4fsnBI9RLf02LDILhgdIoveBa8kIo4F8PJRnenUeJLKs0zJtsr6_lW5O7hIjcupoR5E-4eNKBQ2bYG7G25mqkVf-fZLyN6Wh5kjU0FSaRDELXKJ1emPN0SilgCkVzDF-wh-qaLsesKU2VrJw78wpyvQ675SK3O_Q0Q\/s750\/RRL-small.gif\" style=\"display: none;\"\/><\/p>\n<p>\n<a href=\"https:\/\/en.wikipedia.org\/wiki\/Reinforcement_learning\">Reinforcement learning<\/a> (RL) is an area of <a href=\"https:\/\/en.wikipedia.org\/wiki\/Machine_learning\">machine learning<\/a> that focuses on training intelligent agents using related experiences so they can learn to solve decision making tasks, such as <a href=\"https:\/\/ai.googleblog.com\/2020\/04\/an-optimistic-perspective-on-offline.html\">playing video games<\/a>, <a href=\"http:\/\/rdcu.be\/cbBRc\">flying stratospheric balloons<\/a>, and <a href=\"https:\/\/ai.googleblog.com\/2020\/04\/chip-design-with-deep-reinforcement.html\">designing hardware chips<\/a>. 
Due to the generality of RL, the prevalent trend in RL research is to develop agents that can efficiently learn <em><a href=\"https:\/\/en.wikipedia.org\/wiki\/Tabula_rasa\">tabula rasa<\/a><\/em>, that is, from scratch without using previously learned knowledge about the problem. However, in practice, tabula rasa RL systems are typically the exception rather than the norm for solving large-scale RL problems. Large-scale RL systems, such as <a href=\"https:\/\/openai.com\/five\/\">OpenAI Five<\/a>, which achieves human-level performance on <a href=\"https:\/\/en.wikipedia.org\/wiki\/Dota_2\">Dota 2<\/a>, undergo multiple design changes (e.g., algorithmic or architectural changes) during their development cycle. This modification process can last months and necessitates incorporating such changes without re-training from scratch, which would be prohibitively expensive.<\/p>\n<p>\nFurthermore, the inefficiency of tabula rasa RL research can exclude many researchers from tackling computationally demanding problems. For example, the quintessential benchmark of training a deep RL agent on 50+ Atari 2600 games in the <a href=\"https:\/\/arxiv.org\/abs\/1207.4708\">ALE<\/a> for 200M frames (the standard protocol) requires 1,000+ GPU days. As deep RL moves towards more complex and challenging problems, the computational barrier to entry in RL research will likely become even higher.\n<\/p>\n<p>\nTo address the inefficiencies of tabula rasa RL, we present \u201c<a href=\"https:\/\/agarwl.github.io\/reincarnating_rl\/\">Reincarnating Reinforcement Learning: Reusing Prior Computation To Accelerate Progress<\/a>\u201d at <a href=\"https:\/\/openreview.net\/forum?id=t3X5yMI_4G2\">NeurIPS 2022<\/a>. 
Here, we propose an alternative approach to RL research, where prior computational work, such as learned models, policies, logged data, etc., is reused or transferred between design iterations of an RL agent or from one agent to another. While some sub-areas of RL leverage prior computation, most RL agents are still largely trained from scratch. Until now, there has been no broader effort to leverage prior computational work for the training workflow in RL research. We have also released our <a href=\"https:\/\/agarwl.github.io\/reincarnating_rl\/\">code<\/a> and <a href=\"https:\/\/colab.research.google.com\/drive\/1ktlNni_vwFpFtCgUez-RHW0OdGc2U_Wv?usp=sharing\">trained agents<\/a> to enable researchers to build on this work.\n<\/p>\n<table align=\"center\" cellpadding=\"0\" cellspacing=\"0\" class=\"tr-caption-container\" style=\"margin-left: auto; margin-right: auto;\">\n<tbody>\n<tr>\n<td style=\"text-align: center;\"><a href=\"https:\/\/blogger.googleusercontent.com\/img\/b\/R29vZ2xl\/AVvXsEjw7y_DM1drfYM19tqNiaKfxa6L-clo6_QlDS9pWS70TfajO4A8PpMfuRPVEhFsTcrokjCctGvit_QtWji5vsgI3byl9rP1MH6BkFna0MbxT2RgDAnMjhGePaP3v77Nkw6VmrPg-q5-alItMlyiUWHZU2TyA6AllLmobmQGTu1g6MKkXBjcgA5oPFlIsg\/s1398\/RRL.gif\" style=\"margin-left: auto; margin-right: auto;\"><img decoding=\"async\" border=\"0\" data-original-height=\"725\" data-original-width=\"1398\" src=\"https:\/\/blogger.googleusercontent.com\/img\/b\/R29vZ2xl\/AVvXsEjw7y_DM1drfYM19tqNiaKfxa6L-clo6_QlDS9pWS70TfajO4A8PpMfuRPVEhFsTcrokjCctGvit_QtWji5vsgI3byl9rP1MH6BkFna0MbxT2RgDAnMjhGePaP3v77Nkw6VmrPg-q5-alItMlyiUWHZU2TyA6AllLmobmQGTu1g6MKkXBjcgA5oPFlIsg\/s16000\/RRL.gif\"\/><\/a><\/td>\n<\/tr>\n<tr>\n<td class=\"tr-caption\" style=\"text-align: center;\">Tabula rasa RL vs. Reincarnating RL (RRL). 
While tabula rasa RL focuses on learning from scratch, RRL is based on the premise of reusing prior computational work (e.g., prior learned agents) when training new agents or improving existing agents, even in the same environment. In RRL, new agents need not be trained from scratch, except for initial forays into new problems.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h2>Why Reincarnating RL? <\/h2>\n<p>\nReincarnating RL (RRL) is a more compute and sample-efficient workflow than training from scratch. RRL can democratize research by allowing the broader community to tackle complex RL problems without requiring excessive computational resources. Furthermore, RRL can enable a benchmarking paradigm where researchers continually improve and update existing trained agents, especially on problems where improving performance has real-world impact, such as <a href=\"https:\/\/www.nature.com\/articles\/s41586-020-2939-8.epdf?sharing_token=JYZ0ZlvEivoTq9RkGfWPQtRgN0jAjWel9jnR3ZoTv0Mh-6OgaxBwChMnw6EOI9v07nMOMJGBruSSDc8BFPfwkG1QQ0R-p9CwTuKA6ZO41aQ8e-Y-ffoWrsFX1cztOZfL5cL1mwXL8qU58Plz4GAzu_SLyawhPWS5QV6GieUEDig%3D\">balloon navigation<\/a> or <a href=\"https:\/\/ai.googleblog.com\/2020\/04\/chip-design-with-deep-reinforcement.html\">chip design<\/a>. 
Finally, real-world RL use cases will likely be in scenarios where prior computational work is available (e.g., existing deployed RL policies).\n<\/p>\n<table align=\"center\" cellpadding=\"0\" cellspacing=\"0\" class=\"tr-caption-container\" style=\"margin-left: auto; margin-right: auto;\">\n<tbody>\n<tr>\n<td style=\"text-align: center;\"><a href=\"https:\/\/blogger.googleusercontent.com\/img\/b\/R29vZ2xl\/AVvXsEjm5Y13mKOIt4y1ni6M7gJhOcwxKPQuu3kpQMnXxG-SunbQfFhIHcVzyikw5OCYt1U_9Fn0-zKoLSHhvyUD-Q4c8DhKuTBzrSvIZQzsmp-Isam4HitAJZFNKsrd96DvVJ4e5I-Mhpsc9xV-SUSM1dQ7wGaonHmvJYLDQpYlrqO5GqQc40rsL4ROeyb-cA\/s960\/image3.png\" style=\"margin-left: auto; margin-right: auto;\"><img loading=\"lazy\" decoding=\"async\" border=\"0\" data-original-height=\"540\" data-original-width=\"960\" height=\"360\" src=\"https:\/\/blogger.googleusercontent.com\/img\/b\/R29vZ2xl\/AVvXsEjm5Y13mKOIt4y1ni6M7gJhOcwxKPQuu3kpQMnXxG-SunbQfFhIHcVzyikw5OCYt1U_9Fn0-zKoLSHhvyUD-Q4c8DhKuTBzrSvIZQzsmp-Isam4HitAJZFNKsrd96DvVJ4e5I-Mhpsc9xV-SUSM1dQ7wGaonHmvJYLDQpYlrqO5GqQc40rsL4ROeyb-cA\/w640-h360\/image3.png\" width=\"640\"\/><\/a><\/td>\n<\/tr>\n<tr>\n<td class=\"tr-caption\" style=\"text-align: center;\">RRL as an alternative research workflow. Imagine a researcher who has trained an agent A<sub>1<\/sub> for some time, but now wants to experiment with better architectures or algorithms. 
While the tabula rasa workflow requires retraining another agent from scratch, RRL provides the more viable option of transferring the existing agent A<sub>1<\/sub> to another agent and training this agent further, or simply fine-tuning A<sub>1<\/sub>.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>\nWhile there have been some ad hoc large-scale reincarnation efforts with limited applicability, e.g., <a href=\"https:\/\/arxiv.org\/abs\/1912.06680\">model surgery in Dota2<\/a>, <a href=\"https:\/\/openai.com\/blog\/solving-rubiks-cube\/\">policy distillation in Rubik\u2019s cube<\/a>, <a href=\"https:\/\/www.nature.com\/articles\/s41586-019-1724-z.epdf?author_access_token=lZH3nqPYtWJXfDA10W0CNNRgN0jAjWel9jnR3ZoTv0PSZcPzJFGNAZhOlk4deBCKzKm70KfinloafEF1bCCXL6IIHHgKaDkaTkBcTEv7aT-wqDoG1VeO9-wO3GEoAMF9bAOt7mJ0RWQnRVMbyfgH9A%3D%3D\">PBT in AlphaStar<\/a>, RL fine-tuning a behavior-cloned policy in <a href=\"https:\/\/www.davidsilver.uk\/wp-content\/uploads\/2020\/03\/unformatted_final_mastering_go.pdf\">AlphaGo<\/a> \/ <a href=\"https:\/\/openai.com\/blog\/vpt\/\">Minecraft<\/a>, RRL has not been studied as a research problem in its own right. To this end, we argue for developing general-purpose RRL approaches as opposed to prior ad-hoc solutions.\n<\/p>\n<h2>Case Study: Policy to Value Reincarnating RL<\/h2>\n<p>\nDifferent RRL problems can be instantiated depending on the kind of prior computational work provided. As a step towards developing broadly applicable RRL approaches, we present a case study on the setting of Policy to Value reincarnating RL (PVRL) for efficiently transferring an existing sub-optimal policy (teacher) to a standalone value-based RL agent (student). 
While a policy directly maps a given environment state (e.g., a game screen in Atari) to an action, value-based agents estimate the effectiveness of an action at a given state in terms of achievable future rewards, which allows them to learn from <a href=\"https:\/\/ai.googleblog.com\/2020\/04\/an-optimistic-perspective-on-offline.html\">previously collected data<\/a>.\n<\/p>\n<div style=\"text-align: center;\"><a href=\"https:\/\/blogger.googleusercontent.com\/img\/b\/R29vZ2xl\/AVvXsEj2nodipCSsg4tWrNcxdG-YO0shx4xyakwqPlRIKHqGN1o8Kd-SVWEtFdIIcBgToSlnqJPJq3oktHsQL6VGgyiSPmeAAtGCOv63mIsKL8A6NX-4utJ0tp8UOBcIcCyMDI5EXDFc6FArzym-kxzJYrUeFLmOi5jAINIiT2IPilQ2h39eG_dwyq9ZW6wr9w\/s960\/image4.png\"><img loading=\"lazy\" decoding=\"async\" border=\"0\" data-original-height=\"540\" data-original-width=\"960\" height=\"360\" src=\"https:\/\/blogger.googleusercontent.com\/img\/b\/R29vZ2xl\/AVvXsEj2nodipCSsg4tWrNcxdG-YO0shx4xyakwqPlRIKHqGN1o8Kd-SVWEtFdIIcBgToSlnqJPJq3oktHsQL6VGgyiSPmeAAtGCOv63mIsKL8A6NX-4utJ0tp8UOBcIcCyMDI5EXDFc6FArzym-kxzJYrUeFLmOi5jAINIiT2IPilQ2h39eG_dwyq9ZW6wr9w\/w640-h360\/image4.png\" width=\"640\"\/><\/a><\/div>\n<p>\n  For a PVRL algorithm to be broadly useful, it should satisfy the following requirements:\n<\/p>\n<ul>\n<li><em>Teacher Agnostic<\/em>: The student shouldn\u2019t be constrained by the existing teacher policy\u2019s architecture or training algorithm.\n<\/li>\n<li><em>Weaning off the teacher<\/em>: It is undesirable to maintain dependency on past suboptimal teachers for successive reincarnations.\n<\/li>\n<li><em>Compute \/ Sample Efficient<\/em>: Reincarnation is only useful if it is cheaper than training from scratch.\n<\/li>\n<\/ul>\n<p>\nGiven the PVRL algorithm requirements, we evaluate whether existing approaches, designed with closely related goals, will suffice. 
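As background for the comparisons that follow, the policy vs. value distinction above can be sketched in a few lines of Python; the states, actions, and Q-values here are hypothetical, illustrative numbers only:

```python
import random

# A policy-based agent maps a state directly to an action.
policy = {"s0": "fire", "s1": "left"}  # hypothetical lookup-table policy

def policy_act(state):
    return policy[state]

# A value-based agent instead estimates Q(state, action), the achievable
# future reward, and acts (epsilon-)greedily over those estimates.
q_values = {("s0", "fire"): 1.2, ("s0", "left"): 0.3,
            ("s1", "fire"): 0.1, ("s1", "left"): 0.9}

def q_act(state, actions=("fire", "left"), epsilon=0.1):
    if random.random() < epsilon:   # occasionally explore
        return random.choice(actions)
    return max(actions, key=lambda a: q_values[(state, a)])  # otherwise exploit
```

Because Q-value estimates are fit to (state, action, reward, next-state) tuples, a value-based student can keep learning from logged data even when the teacher only exposes its actions.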
We find that such approaches either result in small improvements over tabula rasa RL or degrade in performance when weaning off the teacher.\n<\/p>\n<p>\nTo address these limitations, we introduce a simple method, <em>QDagger<\/em>, in which the agent distills knowledge from the suboptimal teacher via an <a href=\"https:\/\/www.cs.cmu.edu\/~sross1\/publications\/Ross-AIStats11-NoRegret.pdf\">imitation algorithm<\/a> while simultaneously using its environment interactions for RL. We start with a <a href=\"https:\/\/www.deepmind.com\/publications\/human-level-control-through-deep-reinforcement-learning\">deep Q-network<\/a> (DQN) agent trained for 400M environment frames (a week of single-GPU training) and use it as the teacher for reincarnating student agents trained on only 10M frames (a few hours of training), where the teacher is weaned off over the first 6M frames. For benchmark evaluation, we report the <a href=\"https:\/\/ai.googleblog.com\/2021\/11\/rliable-towards-reliable-evaluation.html\">interquartile mean<\/a> (IQM) metric from the <a href=\"https:\/\/github.com\/google-research\/rliable\">RLiable library<\/a>. As shown below for the PVRL setting on Atari games, we find that the QDagger RRL method outperforms prior approaches.\n<\/p>\n<table align=\"center\" cellpadding=\"0\" cellspacing=\"0\" class=\"tr-caption-container\" style=\"margin-left: auto; margin-right: auto;\">\n<tbody>\n<tr>\n<td style=\"text-align: center;\"><a href=\"https:\/\/blogger.googleusercontent.com\/img\/b\/R29vZ2xl\/AVvXsEi7IMcUj_NtK7YxrLzbDbxQYGzgjdviQSVSJH9qYCisFLYu72pXPDjoc5vxeN1rgkELigD_0Jlh67DoaDkj734WCk2hKEuwv7gQo2yS1F_JUKZWoqTBJgCwOvZxOUzHuRWqErY3vFozmLPFAWIDL_I4IT3X6olvMYbFq_4fD1Fdpn7yeEMW27p8tNxZwg\/s1168\/Screenshot%202022-11-02%2010.33.40%20AM.png\" style=\"margin-left: auto; margin-right: auto;\"><img decoding=\"async\" border=\"0\" data-original-height=\"538\" data-original-width=\"1168\" 
src=\"https:\/\/blogger.googleusercontent.com\/img\/b\/R29vZ2xl\/AVvXsEi7IMcUj_NtK7YxrLzbDbxQYGzgjdviQSVSJH9qYCisFLYu72pXPDjoc5vxeN1rgkELigD_0Jlh67DoaDkj734WCk2hKEuwv7gQo2yS1F_JUKZWoqTBJgCwOvZxOUzHuRWqErY3vFozmLPFAWIDL_I4IT3X6olvMYbFq_4fD1Fdpn7yeEMW27p8tNxZwg\/s16000\/Screenshot%202022-11-02%2010.33.40%20AM.png\"\/><\/a><\/td>\n<\/tr>\n<tr>\n<td class=\"tr-caption\" style=\"text-align: center;\">Benchmarking PVRL algorithms on Atari, with teacher-normalized scores aggregated across 10 games. Tabula rasa DQN (\u2013\u00b7\u2013) obtains a normalized score of 0.4. Standard baseline approaches include <a href=\"https:\/\/arxiv.org\/abs\/1803.03835\">kickstarting<\/a>, <a href=\"https:\/\/ai.googleblog.com\/2022\/04\/efficiently-initializing-reinforcement.html\">JSRL<\/a>, <a href=\"https:\/\/www.deepmind.com\/publications\/making-efficient-use-of-demonstrations-to-solve-hard-exploration-problems\">rehearsal<\/a>, <a href=\"https:\/\/arxiv.org\/abs\/2006.04779\">offline RL pre-training<\/a> and <a href=\"https:\/\/arxiv.org\/abs\/1704.03732\">DQfD<\/a>. Among all methods, only QDagger surpasses teacher performance within 10 million frames and outperforms the teacher in 75% of the games.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h2>Reincarnating RL in Practice<\/h2>\n<p>\nWe further examine the RRL approach on the <a href=\"https:\/\/arxiv.org\/abs\/1709.06009\">Arcade Learning Environment<\/a>, a widely used deep RL benchmark. First, we take a <a href=\"https:\/\/www.deepmind.com\/publications\/human-level-control-through-deep-reinforcement-learning\">Nature DQN<\/a> agent that uses the <a href=\"https:\/\/www.cs.toronto.edu\/~tijmen\/csc321\/slides\/lecture_slides_lec6.pdf\">RMSProp<\/a> optimizer and fine-tune it with the <a href=\"https:\/\/arxiv.org\/abs\/1412.6980\">Adam<\/a> optimizer to create a DQN (Adam) agent. 
While it is possible to train a DQN (Adam) agent from scratch, we demonstrate that fine-tuning Nature DQN with the Adam optimizer matches the from-scratch performance using 40x less data and compute.<\/p>\n<table align=\"center\" cellpadding=\"0\" cellspacing=\"0\" class=\"tr-caption-container\" style=\"margin-left: auto; margin-right: auto;\">\n<tbody>\n<tr>\n<td style=\"text-align: center;\"><a href=\"https:\/\/blogger.googleusercontent.com\/img\/b\/R29vZ2xl\/AVvXsEiKJUYfsufdLqDPvuu-EZdz-Wpi3t69nl2i10Qb5lCjp3pIJjqse7jqNEhoh0WZgM4yNt7c6Fdr_Hz1xQJIir-9mtoaEDoQ2DuVo4RqL9d9xHP8nHHMdfJm8RsOVmd-_V2N1CKuF79mZ5ZWbAPyVH1TcjRO0-_vMZeeBvV4BUZrsLmbn6b-epf4saTLow\/s1168\/Screenshot%202022-11-02%2010.36.36%20AM.png\" style=\"margin-left: auto; margin-right: auto;\"><img decoding=\"async\" border=\"0\" data-original-height=\"519\" data-original-width=\"1168\" src=\"https:\/\/blogger.googleusercontent.com\/img\/b\/R29vZ2xl\/AVvXsEiKJUYfsufdLqDPvuu-EZdz-Wpi3t69nl2i10Qb5lCjp3pIJjqse7jqNEhoh0WZgM4yNt7c6Fdr_Hz1xQJIir-9mtoaEDoQ2DuVo4RqL9d9xHP8nHHMdfJm8RsOVmd-_V2N1CKuF79mZ5ZWbAPyVH1TcjRO0-_vMZeeBvV4BUZrsLmbn6b-epf4saTLow\/s16000\/Screenshot%202022-11-02%2010.36.36%20AM.png\"\/><\/a><\/td>\n<\/tr>\n<tr>\n<td class=\"tr-caption\" style=\"text-align: center;\">Reincarnating DQN (Adam) via Fine-Tuning. The vertical separator corresponds to loading network weights and replay data for fine-tuning. <strong>Left:<\/strong> Tabula rasa Nature DQN nearly converges in performance after 200M environment frames.<strong> Right:<\/strong> Fine-tuning this Nature DQN agent using a reduced learning rate with the Adam optimizer for 20 million frames obtains similar results to DQN (Adam) trained from scratch for 400M frames.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>\nGiven the DQN (Adam) agent as a starting point, fine-tuning is restricted to the 3-layer <a href=\"https:\/\/en.wikipedia.org\/wiki\/Convolutional_neural_network\">convolutional<\/a> architecture. 
So, we consider a more general reincarnation approach that leverages recent architectural and algorithmic advances without training from scratch. Specifically, we use QDagger to reincarnate another RL agent that uses a more advanced RL algorithm (<a href=\"https:\/\/arxiv.org\/pdf\/1710.02298.pdf\">Rainbow<\/a>) and a better neural network architecture (<a href=\"https:\/\/arxiv.org\/abs\/1802.01561\">Impala-CNN ResNet<\/a>) from the fine-tuned DQN (Adam) agent.\n<\/p>\n<table align=\"center\" cellpadding=\"0\" cellspacing=\"0\" class=\"tr-caption-container\" style=\"margin-left: auto; margin-right: auto;\">\n<tbody>\n<tr>\n<td style=\"text-align: center;\"><a href=\"https:\/\/blogger.googleusercontent.com\/img\/b\/R29vZ2xl\/AVvXsEg7BUJU7A9Iv0jwOGjZn_ZOQzy2CK8eby2xRTAVa9ju336McINSwj95E78RmE-jImJII8YBgeIvC4A6huvdXZnsQQvl_jFXu3o3-bI3XQ4yE_VK1wrCqTfDc4pR7Gh6KY05U-ydBQMWlncZE3ev4cKlnA5mOGHLC9UWf188nf4yttZb9hj3OowZRTfaVg\/s1168\/Screenshot%202022-11-02%2010.37.34%20AM.png\" style=\"margin-left: auto; margin-right: auto;\"><img decoding=\"async\" border=\"0\" data-original-height=\"521\" data-original-width=\"1168\" src=\"https:\/\/blogger.googleusercontent.com\/img\/b\/R29vZ2xl\/AVvXsEg7BUJU7A9Iv0jwOGjZn_ZOQzy2CK8eby2xRTAVa9ju336McINSwj95E78RmE-jImJII8YBgeIvC4A6huvdXZnsQQvl_jFXu3o3-bI3XQ4yE_VK1wrCqTfDc4pR7Gh6KY05U-ydBQMWlncZE3ev4cKlnA5mOGHLC9UWf188nf4yttZb9hj3OowZRTfaVg\/s16000\/Screenshot%202022-11-02%2010.37.34%20AM.png\"\/><\/a><\/td>\n<\/tr>\n<tr>\n<td class=\"tr-caption\" style=\"text-align: center;\">Reincarnating a different architecture \/ algorithm via QDagger. The vertical separator is the point at which we apply offline pre-training using QDagger for reincarnation. <strong>Left:<\/strong> Fine-tuning DQN with Adam.<strong> Right: <\/strong>Comparison of a tabula rasa Impala-CNN Rainbow agent (sky blue) to an Impala-CNN Rainbow agent (pink) trained using QDagger RRL from the fine-tuned DQN (Adam). 
The reincarnated Impala-CNN Rainbow agent consistently outperforms its scratch counterpart. Note that further fine-tuning DQN (Adam) results in diminishing returns (yellow).<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>\nOverall, these results indicate that past research could have been accelerated by incorporating an RRL approach to designing agents, instead of re-training agents from scratch. Our <a href=\"https:\/\/arxiv.org\/pdf\/2206.01626.pdf\">paper<\/a> also contains results on the <a href=\"https:\/\/ai.googleblog.com\/2022\/02\/the-balloon-learning-environment.html\">Balloon Learning Environment<\/a>, where we demonstrate that RRL allows us to make progress on the problem of navigating stratospheric balloons using only a few hours of TPU compute by reusing a <a href=\"https:\/\/ai.googleblog.com\/2020\/03\/massively-scaling-reinforcement.html\">distributed RL<\/a> agent trained on TPUs for more than a month.\n<\/p>\n<h2>Discussion<\/h2>\n<p>\nFairly comparing reincarnation approaches involves using the exact same computational work and workflow. Furthermore, research findings in RRL that generalize broadly will be about how effective an algorithm is given access to existing computational work; for example, we successfully applied QDagger, developed on Atari, for reincarnation on the Balloon Learning Environment. 
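Concretely, the QDagger objective described earlier couples the usual TD loss with a distillation term toward the teacher's policy, whose weight is weaned toward zero. The sketch below uses illustrative numbers and a simplified fixed weaning schedule; the paper's actual loss uses n-step returns and ties the weight to the student/teacher performance ratio:

```python
import math

def softmax(qs, temperature=1.0):
    """Turn Q-values into an action distribution."""
    exps = [math.exp(q / temperature) for q in qs]
    z = sum(exps)
    return [e / z for e in exps]

def qdagger_loss(td_loss, student_q, teacher_probs, distill_weight):
    """TD (RL) loss plus a cross-entropy distillation term toward the teacher."""
    student_probs = softmax(student_q)
    distill = -sum(p_t * math.log(p_s)
                   for p_t, p_s in zip(teacher_probs, student_probs))
    return td_loss + distill_weight * distill

# Illustrative weaning: the distillation weight decays to 0 over early training,
# after which the student trains on its own RL objective alone.
teacher_probs = softmax([2.0, 0.5, 0.1])   # teacher's softmax policy at one state
losses = [qdagger_loss(0.2, [1.0, 0.4, 0.2], teacher_probs, lam)
          for lam in (1.0, 0.5, 0.0)]
```

Because only the teacher's action distribution enters the loss, the approach stays teacher-agnostic: any architecture or algorithm that yields action probabilities can serve as the teacher.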
As such, we speculate that research in reincarnating RL can branch out in two directions:\n<\/p>\n<ul>\n<li><strong>Standardized benchmarks with open-sourced computational work:<\/strong> Akin to <a href=\"https:\/\/en.wikipedia.org\/wiki\/Natural_language_processing\">NLP<\/a> and <a href=\"https:\/\/en.wikipedia.org\/wiki\/Computer_vision\">vision<\/a>, where a small set of pre-trained models is commonly used, research in RRL may also converge to a small set of open-sourced computational work (e.g., pre-trained teacher policies) on a given benchmark.\n<\/li>\n<li><strong>Real-world domains:<\/strong> Since obtaining higher performance has real-world impact in some domains, the community is incentivized to reuse state-of-the-art agents and try to improve their performance.\n<\/li>\n<\/ul>\n<p>\nSee our <a href=\"https:\/\/arxiv.org\/pdf\/2206.01626.pdf\">paper<\/a> for a broader discussion on scientific comparisons, generalizability and reproducibility in RRL. Overall, we hope that this work motivates researchers to release computational work (e.g., model checkpoints) on which others could directly build. In this regard, we have open-sourced <a href=\"https:\/\/github.com\/google-research\/reincarnating_rl\">our code<\/a> and <a href=\"https:\/\/colab.research.google.com\/drive\/1ktlNni_vwFpFtCgUez-RHW0OdGc2U_Wv?usp=sharing\">trained agents<\/a> with their final replay buffers. We believe that reincarnating RL can substantially accelerate research progress by building on prior computational work, as opposed to always starting from scratch.\n<\/p>\n<h2>Acknowledgements<\/h2>\n<p>\n<em>This work was done in collaboration with Pablo Samuel Castro, Aaron Courville and Marc Bellemare. We\u2019d like to thank Tom Small for the animated figure used in this post. 
We are also grateful for feedback from the anonymous NeurIPS reviewers and several members of the Google Research team, DeepMind and Mila.<\/em>\n<\/p>\n<\/div>\n<p><a href=\"http:\/\/ai.googleblog.com\/2022\/11\/beyond-tabula-rasa-reincarnating.html\">Source link<\/a><\/p>