Energy-Efficient and Accelerated Resource Allocation in O-RAN Slicing Using Deep Reinforcement Learning and Transfer Learning
Published online: 19 Sep 2024
Pages: 132 - 150
Received: 20 Jul 2024
Accepted: 29 Aug 2024
DOI: https://doi.org/10.2478/cait-2024-0029
Keywords
© 2024 Heba Sherif et al., published by Sciendo
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Next Generation Wireless Networks (NGWNs) have two main components: Network Slicing (NS) and Open Radio Access Networks (O-RAN). NS is needed to handle diverse Quality of Service (QoS) requirements, while O-RAN provides an open environment for network vendors and Mobile Network Operators (MNOs). In recent years, Deep Reinforcement Learning (DRL) approaches have been proposed to solve key issues in NGWNs. The primary obstacles to deploying DRL are its slow convergence and instability. Additionally, training these algorithms consumes considerable energy, producing carbon emissions that negatively impact the climate. This paper tackles the dynamic allocation of O-RAN radio resources to achieve better QoS, faster convergence, stability, lower energy and power consumption, and reduced carbon emissions. First, we develop an agent with a newly designed latency-based reward function and a top-k filtration mechanism for actions. Then, we propose a policy Transfer Learning approach to accelerate agent convergence. We compare our model against two other models.
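The abstract mentions a top-k filtration mechanism for actions. As a rough illustration of the general idea (the paper's actual mechanism may differ), one common pattern is to restrict an epsilon-greedy DRL agent's choice to the k actions with the highest estimated Q-values, so that exploration never wastes steps on clearly poor resource-allocation actions. The function below is a hypothetical sketch of that pattern, not the authors' implementation:

```python
import random

def select_action(q_values, k=5, epsilon=0.1, rng=random):
    """Epsilon-greedy selection restricted to the top-k Q-valued actions.

    Hypothetical sketch of a top-k action filtration step: the agent
    first keeps only the k actions with the highest Q-value estimates,
    then explores (with probability epsilon) or exploits within that set.
    """
    k = min(k, len(q_values))
    # Indices of the k actions with the highest Q-values.
    top_k = sorted(range(len(q_values)), key=lambda a: q_values[a])[-k:]
    if rng.random() < epsilon:
        return rng.choice(top_k)  # explore, but only within the top-k set
    return max(top_k, key=lambda a: q_values[a])  # greedy choice
```

With epsilon = 0 this reduces to the ordinary greedy argmax; with epsilon = 1 it samples uniformly over the k best candidates, which is how such a filter can stabilize exploration in a large action space.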