Optimizing Traffic Light Control using Enhanced DQN: Minimizing Waiting Time for Regular and Emergency Vehicles
Published Online: Jun 16, 2025
Page range: 266 - 275
DOI: https://doi.org/10.2478/ttj-2025-0020
© 2025 Wissam Bouzi et al., published by Sciendo
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
An efficient traffic management system is essential to minimize congestion and ensure the rapid circulation of emergency vehicles. This research proposes a new single-agent deep reinforcement learning model based on a deep Q-network (DQN) to optimize traffic lights, aiming to reduce waiting times and increase vehicle speed, with particular emphasis on emergency vehicles. Our method incorporates a new state representation that captures variations in vehicle density and speed; these quantities directly shape the reward structure, which prioritizes both traffic flow and emergency-vehicle response times. The agent's decision-making is enhanced by an experience replay mechanism, which ensures that past experiences are used effectively during learning. The model's effectiveness was tested in a simulated environment using SUMO, showing significant improvements in traffic management compared to traditional methods. Experimental results show that our system significantly reduces average waiting times and improves emergency vehicle prioritization.
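To make the replay-memory idea concrete, the following is a minimal sketch, not the paper's implementation: a fixed-capacity replay buffer feeding replayed Q-learning updates in a toy traffic-light setting. The state encoding (discretized density and speed bins), the two-action space (keep vs. switch phase), the reward (negative density as a proxy for waiting time), and all hyperparameter values are illustrative assumptions; the paper's DQN uses a neural network and a richer SUMO-derived state.

```python
import random
from collections import defaultdict, deque

GAMMA, ALPHA = 0.95, 0.1   # discount factor and learning rate (assumed values)
ACTIONS = [0, 1]           # hypothetical action space: 0 = keep phase, 1 = switch phase

class ReplayMemory:
    """Fixed-capacity buffer of (state, action, reward, next_state, done) transitions."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def push(self, *transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Uniform random mini-batch, capped at the current buffer size.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

    def __len__(self):
        return len(self.buffer)

def q_update(q, batch):
    """One replayed Q-learning step (a tabular stand-in for the DQN's gradient step)."""
    for s, a, r, s_next, done in batch:
        target = r if done else r + GAMMA * max(q[(s_next, b)] for b in ACTIONS)
        q[(s, a)] += ALPHA * (target - q[(s, a)])

# Toy demo: states are (density_bin, speed_bin) pairs; switching the phase
# drains the queue (density falls), keeping it lets the queue grow.
random.seed(0)
q = defaultdict(float)
memory = ReplayMemory(capacity=1000)
state = (2, 1)
for _ in range(200):
    # Epsilon-greedy action selection (epsilon = 0.2, assumed).
    if random.random() < 0.2:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    density = max(0, state[0] - 1) if action == 1 else min(3, state[0] + 1)
    next_state = (density, state[1])
    reward = -density  # penalize high density (waiting-time proxy)
    memory.push(state, action, reward, next_state, False)
    q_update(q, memory.sample(32))
    state = next_state

print(len(memory))  # 200 transitions stored
```

Sampling past transitions uniformly, rather than learning only from the latest step, breaks the temporal correlation between consecutive updates; this is the standard motivation for experience replay in DQN training.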