Metrics for Assessing Generalization of Deep Reinforcement Learning in Parameterized Environments
About this article
Published online: 25 Dec 2023
Pages: 45 - 61
Received: 24 Jun 2023
Accepted: 19 Oct 2023
DOI: https://doi.org/10.2478/jaiscr-2024-0003
© 2024 Maciej Aleksandrowicz et al., published by Sciendo
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
In this work, generalization metrics for Deep Reinforcement Learning (DRL) algorithms were proposed and evaluated. The experiments were conducted in the DeepMind Control (DMC) benchmark suite with parameterized environments. The performance of three DRL algorithms on ten selected tasks from the DMC suite was analysed using the existing generalization-gap formalism and the proposed ratio and decibel metrics. The results are presented with the proposed methods: an average transfer metric and a plot of the environment normal distribution. These efforts highlight major changes in model performance and provide additional insight for decisions about model requirements.
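The exact definitions of the proposed metrics appear in the paper body. As a rough illustration of the general idea only, the following sketch assumes the gap is the drop in average return between the training environment and a parameter-shifted one, the ratio metric is their quotient, and the decibel metric expresses that ratio on a logarithmic scale; these hypothetical forms are not necessarily the authors' exact formulas.

```python
import math

def generalization_gap(train_return: float, test_return: float) -> float:
    """Absolute gap: drop in average return under a parameter shift
    (illustrative definition, not necessarily the paper's exact formula)."""
    return train_return - test_return

def return_ratio(train_return: float, test_return: float) -> float:
    """Ratio metric: fraction of training performance retained (hypothetical form)."""
    return test_return / train_return

def return_decibels(train_return: float, test_return: float) -> float:
    """Decibel metric: the retention ratio on a logarithmic (dB) scale
    (hypothetical form)."""
    return 10.0 * math.log10(test_return / train_return)

# Example: an agent scoring 900 on its training environment and 450 on a
# perturbed one retains half its performance, i.e. roughly -3 dB.
print(generalization_gap(900.0, 450.0))          # 450.0
print(return_ratio(900.0, 450.0))                # 0.5
print(round(return_decibels(900.0, 450.0), 2))   # -3.01
```

A logarithmic scale of this kind compresses large performance drops into a compact range, which can make degradation across many environment-parameter settings easier to compare side by side.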