Knowledge Transfer in Model-Based Reinforcement Learning Agents for Efficient Multi-Task Learning
Date
2025
Authors
Kuzmenko, Dmytro
Shvai, Nadiya
Abstract
We propose an efficient knowledge transfer approach for model-based reinforcement learning, addressing the challenge of deploying large world models in resource-constrained environments. Our method distills a high-capacity multi-task agent (317M parameters) into a compact 1M-parameter model, achieving state-of-the-art performance on the MT30 benchmark with a normalized score of 28.45, a substantial improvement over the original 1M-parameter model’s score of 18.93. This demonstrates the ability of our distillation technique to consolidate complex multi-task knowledge effectively. Additionally, we apply FP16 post-training quantization, reducing the model size by 50% while maintaining performance. Our work bridges the gap between the power of large models and practical deployment constraints, offering a scalable solution for efficient and accessible multi-task reinforcement learning in robotics and other resource-limited domains.
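The two steps summarized in the abstract (teacher-to-student distillation followed by FP16 post-training quantization) can be sketched as follows. This is a minimal illustration assuming a PyTorch setup; the `teacher`, `student`, and `batch` objects, the regression-style loss, and all names are hypothetical placeholders rather than the paper's exact training objective.

```python
import torch
import torch.nn.functional as F

def distillation_step(teacher, student, batch, optimizer):
    """One gradient step pushing the compact student toward the frozen teacher.

    `teacher` and `student` are assumed to be nn.Module world models that map
    (observation, action) tensors to latent predictions; `batch` is a dict of
    tensors sampled from multi-task replay data. All names are illustrative.
    """
    obs, act = batch["obs"], batch["action"]

    with torch.no_grad():
        teacher_pred = teacher(obs, act)   # frozen high-capacity teacher (e.g., 317M params)

    student_pred = student(obs, act)       # compact student (e.g., 1M params)

    # Regression-style distillation loss: match the teacher's predictions.
    loss = F.mse_loss(student_pred, teacher_pred)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Post-training FP16 conversion: halves the checkpoint size without retraining.
student_fp16 = student.half()
torch.save(student_fp16.state_dict(), "student_fp16.pt")
```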
Keywords
Model-Based Reinforcement Learning, Multi-Task Learning, Knowledge Distillation, Model Compression, Efficient RL Agents, conference materials
Citation
Kuzmenko D. Knowledge Transfer in Model-Based Reinforcement Learning Agents for Efficient Multi-Task Learning / Dmytro Kuzmenko, Nadiya Shvai // Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS. - 2025. - P. 2597-2599. - https://doi.org/10.48550/arXiv.2501.05329