
Training Value Functions via Classification for Scalable Deep Reinforcement Learning: Study by Google DeepMind Researchers and Others

by Mohammad Asjad

Value functions are a core component of deep reinforcement learning (RL). Implemented with neural networks, they are typically trained via mean squared error (MSE) regression so that their predictions match bootstrapped target values. However, scaling value-based RL methods that rely on this regression to large networks, such as high-capacity Transformers, has proven difficult. This stands in sharp contrast to supervised learning, where the cross-entropy classification loss enables reliable scaling to very large networks.
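For context, the standard regression setup the article refers to looks roughly like the following sketch. This is a minimal PyTorch-style illustration assuming hypothetical `q_net` and `target_net` modules and a simple replay batch; it is not the paper's code.

```python
import torch
import torch.nn.functional as F

def mse_td_loss(q_net, target_net, batch, gamma=0.99):
    """Standard TD regression: fit scalar Q-values to bootstrapped targets with MSE."""
    obs, actions, rewards, next_obs, dones = batch

    # Q(s, a) predicted by the online network
    q_values = q_net(obs).gather(1, actions.unsqueeze(1)).squeeze(1)

    # Bootstrapped scalar target: r + gamma * max_a' Q_target(s', a')
    with torch.no_grad():
        next_q = target_net(next_obs).max(dim=1).values
        td_target = rewards + gamma * (1.0 - dones) * next_q

    # Mean squared error between scalar predictions and scalar targets
    return F.mse_loss(q_values, td_target)
```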

In deep learning, classification tasks scale well with large neural networks, and regression tasks often benefit from being reframed as classification. The reframing converts real-valued targets into categorical labels and minimizes a categorical cross-entropy loss instead of a squared error. Despite these successes in supervised learning, scaling value-based RL methods that rely on regression, such as deep Q-learning and actor-critic, remains challenging, particularly with large networks such as Transformers.
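One simple way to convert a scalar target into categorical labels is a "two-hot" projection onto a fixed grid of bin values, sketched below. The bin range and bin count here are arbitrary illustrative choices, not the paper's settings.

```python
import torch

def two_hot(targets, v_min=-10.0, v_max=10.0, num_bins=51):
    """Project scalar targets onto categorical distributions over fixed bin centers.

    Each scalar receives probability mass on its two neighboring bins, weighted
    by distance, so the mean of the resulting distribution equals the target.
    """
    targets = targets.clamp(v_min, v_max)

    # Fractional index of each target on the bin grid
    idx = (targets - v_min) / (v_max - v_min) * (num_bins - 1)
    lower = idx.floor().long().clamp(0, num_bins - 1)
    upper = (lower + 1).clamp(0, num_bins - 1)
    upper_w = idx - lower.float()  # weight assigned to the upper bin

    probs = torch.zeros(*targets.shape, num_bins)
    probs.scatter_(-1, lower.unsqueeze(-1), (1.0 - upper_w).unsqueeze(-1))
    probs.scatter_add_(-1, upper.unsqueeze(-1), upper_w.unsqueeze(-1))
    return probs
```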

Researchers from Google DeepMind and collaborators have undertaken a broad study to address this problem. Their work examines methods for training value functions with a categorical cross-entropy loss in deep RL. The findings demonstrate substantial improvements in performance, robustness, and scalability over conventional regression-based methods. The HL-Gauss approach, in particular, yields significant gains across diverse tasks and domains. Diagnostic experiments suggest that categorical cross-entropy mitigates challenges specific to deep RL, such as noisy and nonstationary targets, offering insight into more effective learning algorithms.

Their approach transforms the regression problem in TD learning into a classification problem. Instead of minimizing the squared distance between scalar Q-values and TD targets, it minimizes the distance between categorical distributions representing these quantities. A categorical representation of the action-value function is defined, which allows the cross-entropy loss to be used for TD learning. The study compares strategies for constructing the categorical targets: Two-Hot and HL-Gauss, which project a scalar TD target onto a fixed set of bins, and C51, which directly models the categorical return distribution. These methods aim to improve robustness and scalability in deep RL.
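As a rough illustration of the HL-Gauss idea, the sketch below spreads each scalar TD target over a fixed set of bins using a Gaussian (the histogram-loss construction) and then trains a categorical Q-head with cross-entropy. The bin range, bin count, and smoothing width `sigma` are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def hl_gauss_targets(targets, v_min=-10.0, v_max=10.0, num_bins=51, sigma=0.75):
    """Convert scalar TD targets into smoothed categorical targets (HL-Gauss style).

    Each target is replaced by a Gaussian centered on it; the probability mass
    falling inside each bin is computed from the Gaussian CDF at the bin edges.
    """
    edges = torch.linspace(v_min, v_max, num_bins + 1)       # bin edges
    normal = torch.distributions.Normal(targets.unsqueeze(-1), sigma)
    cdf = normal.cdf(edges)                                   # (..., num_bins + 1)
    probs = cdf[..., 1:] - cdf[..., :-1]                      # mass inside each bin
    return probs / probs.sum(dim=-1, keepdim=True)            # renormalize clipped tails

def categorical_td_loss(logits, scalar_td_targets):
    """Cross-entropy between the predicted categorical Q-distribution and HL-Gauss targets."""
    target_probs = hl_gauss_targets(scalar_td_targets)
    return -(target_probs * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
```

In this kind of setup, the scalar Q-value needed to form bootstrapped TD targets is recovered as the expectation of the predicted categorical distribution over the bin centers.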

The experiments demonstrate that the HL-Gauss cross-entropy loss consistently outperforms traditional regression losses such as MSE across a range of domains, including Atari games, chess, language agents, and robotic manipulation. It shows improved performance, scalability, and sample efficiency, indicating its efficacy for training value-based deep RL models. HL-Gauss also scales better with larger networks and achieves superior results compared with both regression-based and distributional RL approaches.

In conclusion, the researchers from Google DeepMind and others have demonstrated that reframing regression as classification, that is, minimizing categorical cross-entropy rather than mean squared error, leads to significant gains in performance and scalability across various tasks and neural network architectures in value-based RL. These improvements stem from the cross-entropy loss's capacity to support more expressive representations and to better handle noise and nonstationarity. Although these challenges are not eliminated entirely, the findings underscore the substantial impact of this simple change of loss.

Check out the Paper. All credit for this research goes to the researchers of this project.


