Metrics for Assessing Generalization of Deep Reinforcement Learning in Parameterized Environments
ABOUT THIS ARTICLE
Published online: 25 Dec 2023
Pages: 45 - 61
Received: 24 Jun 2023
Accepted: 19 Oct 2023
DOI: https://doi.org/10.2478/jaiscr-2024-0003
© 2024 Maciej Aleksandrowicz et al., published by Sciendo
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
In this work, a study proposing generalization metrics for Deep Reinforcement Learning (DRL) algorithms was performed. The experiments were conducted in the DeepMind Control (DMC) benchmark suite with parameterized environments. The performance of three DRL algorithms on ten selected tasks from the DMC suite was analysed using the existing generalization-gap formalism together with the proposed ratio and decibel metrics. The results are presented with the proposed methods: an average transfer metric and a plot of the environment normal distribution. These efforts made it possible to highlight major changes in model performance and to provide additional insight for decisions about model requirements.
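As a rough illustration of the kind of metrics the abstract mentions, the sketch below computes an absolute generalization gap, a performance-retention ratio, and that ratio expressed in decibels for a policy's episodic return on its training environment versus a parameter-shifted test environment. The function name and the exact definitions are assumptions for illustration, not necessarily the formulas used in the paper.

```python
import math

def generalization_metrics(train_return, test_return):
    """Compare a policy's return in-distribution vs. on a shifted
    environment. NOTE: gap/ratio/decibel here are common-sense
    reconstructions, not the paper's exact definitions."""
    gap = train_return - test_return       # absolute generalization gap
    ratio = test_return / train_return     # fraction of performance retained
    decibels = 10.0 * math.log10(ratio)    # same ratio on a log (dB) scale
    return gap, ratio, decibels

# Hypothetical returns: policy keeps half its performance on the shifted task
gap, ratio, db = generalization_metrics(train_return=900.0, test_return=450.0)
print(gap, ratio, round(db, 2))  # 450.0 0.5 -3.01
```

A decibel scale makes large performance drops across many environment parameterizations easier to compare on one plot, since multiplicative losses become additive.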