Metrics for Assessing Generalization of Deep Reinforcement Learning in Parameterized Environments
About this article
Published Online: Dec 25, 2023
Page range: 45 - 61
Received: Jun 24, 2023
Accepted: Oct 19, 2023
DOI: https://doi.org/10.2478/jaiscr-2024-0003
© 2024 Maciej Aleksandrowicz et al., published by Sciendo
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
In this work, a study proposing generalization metrics for Deep Reinforcement Learning (DRL) algorithms was performed. The experiments were conducted in the DeepMind Control (DMC) benchmark suite with parameterized environments. The performance of three DRL algorithms on ten selected tasks from the DMC suite was analysed using the existing generalization gap formalism and the proposed ratio and decibel metrics. The results were presented with the proposed methods: an average transfer metric and a plot for the environment normal distribution. These methods make it possible to highlight major changes in model performance and provide additional insight for decisions about model requirements.
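The abstract names a generalization gap together with ratio and decibel metrics but does not reproduce their definitions. The sketch below shows one plausible way such quantities could be computed from episode returns on the training and perturbed environment settings; the function name and the exact formulas are assumptions for illustration and may differ from the definitions used in the paper.

```python
# Hypothetical sketch; the paper's exact metric definitions may differ.
import numpy as np

def generalization_metrics(train_returns, eval_returns, eps=1e-8):
    """Compare mean episode return on the training environment setting
    against the mean return on a perturbed (parameterized) setting."""
    r_train = float(np.mean(train_returns))
    r_eval = float(np.mean(eval_returns))

    gap = r_train - r_eval                  # generalization gap (absolute drop)
    ratio = r_eval / (r_train + eps)        # ratio metric (share of performance kept)
    decibel = 10.0 * np.log10(ratio + eps)  # ratio expressed on a decibel scale

    return {"gap": gap, "ratio": ratio, "dB": decibel}

# Example: an agent scoring 900 on the training setting and 600 on a
# modified environment keeps ~66.7% of its performance (about -1.76 dB).
print(generalization_metrics([900.0], [600.0]))
```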