Authors: Kuzmenko, Dmytro; Shvai, Nadiya
Date issued: 2025
Date available: 2025-11-19
Citation: Kuzmenko D. Knowledge Transfer in Model-Based Reinforcement Learning Agents for Efficient Multi-Task Learning / Dmytro Kuzmenko, Nadiya Shvai // Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS. - 2025. - P. 2597-2599. - https://doi.org/10.48550/arXiv.2501.05329
DOI: https://doi.org/10.48550/arXiv.2501.05329
URI: https://ekmair.ukma.edu.ua/handle/123456789/37605
Title: Knowledge Transfer in Model-Based Reinforcement Learning Agents for Efficient Multi-Task Learning
Type: Conference materials
Language: en
Keywords: Model-Based Reinforcement Learning; Multi-Task Learning; Knowledge Distillation; Model Compression; Efficient RL Agents

Abstract: We propose an efficient knowledge transfer approach for model-based reinforcement learning, addressing the challenge of deploying large world models in resource-constrained environments. Our method distills a high-capacity multi-task agent (317M parameters) into a compact 1M-parameter model, achieving state-of-the-art performance on the MT30 benchmark with a normalized score of 28.45, a substantial improvement over the original 1M-parameter model's score of 18.93. This demonstrates that our distillation technique effectively consolidates complex multi-task knowledge. Additionally, we apply FP16 post-training quantization, reducing the model size by 50% while maintaining performance. Our work bridges the gap between the power of large models and practical deployment constraints, offering a scalable solution for efficient and accessible multi-task reinforcement learning in robotics and other resource-limited domains.
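The abstract does not specify the distillation objective, so the sketch below is only a minimal illustration of teacher-student knowledge transfer for a continuous-control agent: a small student policy regresses onto a frozen high-capacity teacher's actions. All names (`StudentPolicy`, `distill_step`), the architecture, and the MSE objective are assumptions for illustration, not the paper's method.

```python
import torch
import torch.nn as nn

# Hypothetical compact student policy; the paper's actual 1M-parameter
# architecture is not described in the abstract.
class StudentPolicy(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),  # actions in [-1, 1]
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

@torch.no_grad()
def teacher_actions(teacher: nn.Module, obs: torch.Tensor) -> torch.Tensor:
    # Query the frozen high-capacity teacher for target actions.
    return teacher(obs)

def distill_step(student, teacher, obs, optimizer) -> float:
    # Behavioral-cloning-style distillation: match the teacher's outputs
    # on a batch of observations (e.g., sampled from a replay buffer).
    target = teacher_actions(teacher, obs)
    loss = nn.functional.mse_loss(student(obs), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage (shapes are placeholders):
# student = StudentPolicy(obs_dim=39, act_dim=6)
# optimizer = torch.optim.Adam(student.parameters(), lr=3e-4)
# loss = distill_step(student, teacher, obs_batch, optimizer)
```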
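The FP16 post-training quantization mentioned in the abstract corresponds, in PyTorch terms, to casting trained FP32 weights to half precision, which halves checkpoint size, consistent with the reported 50% reduction. A minimal sketch, assuming the `student` module from the previous sketch:

```python
import torch

# Post-training cast to FP16: no retraining, each 4-byte FP32 weight
# becomes a 2-byte FP16 weight, so the saved checkpoint is ~50% smaller.
student_fp16 = student.half()
torch.save(student_fp16.state_dict(), "student_fp16.pt")

# At inference time, inputs must match the weight dtype:
# action = student_fp16(obs.half())
```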