Faculty of Informatics (Факультет інформатики)
Browsing Faculty of Informatics by Author "Severhin, Oleksandr"
Efficient Policy Learning via Knowledge Distillation for Robotic Manipulation (Національний університет "Києво-Могилянська академія", 2025). Severhin, Oleksandr; Kuzmenko, Dmytro; Shvai, Nadiya.

This work addresses the computational intractability of large-scale Reinforcement Learning (RL) models for robotic manipulation. While world-model-based methods such as TD-MPC2 achieve strong performance across diverse manipulation tasks, their immense parameter count (e.g., 317M) hinders training and deployment on resource-constrained hardware. The research investigates Knowledge Distillation (KD), using the loss function described in [1] and [2], as a primary method for model compression: a lightweight "student" model is trained to mimic the behavior of a large, pre-trained "teacher" model. Unlike in supervised learning, distilling knowledge in RL is uniquely complex, because the objective is to transfer a dynamic, reward-driven policy rather than a simple input-output mapping.
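The abstract references a specific distillation loss from [1] and [2] that is not reproduced here. As a purely illustrative sketch of the general idea — a student matching a teacher's softened action distribution — the classic soft-target KD loss (Hinton-style, with a temperature parameter) might look as follows; all function names and the choice of temperature are assumptions, not the thesis's actual formulation:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of raw action logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions.

    Hypothetical sketch only: the thesis uses a loss from [1] and [2],
    which may differ. The T^2 factor follows the common convention of
    keeping gradient magnitudes comparable across temperatures.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return temperature ** 2 * sum(
        pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0
    )
```

When the student's logits equal the teacher's, the loss is zero; any mismatch yields a positive penalty, which is what drives the student toward the teacher's policy.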