Open Access

Advanced Traffic Signal Control System Using Deep Double Q-Learning with Pedestrian Factors

March 18, 2025


Xiaoyuan Liang, Tan Yan, Joyoung Lee, and Guiling Wang. A distributed intersection management protocol for safety, efficiency, and driver’s comfort. IEEE Internet of Things Journal, 5(3):1924–1935, 2018.

Reza Zanjirani Farahani, Elnaz Miandoabchi, W.Y. Szeto, and Hannaneh Rashidi. A review of urban transportation network design problems. European Journal of Operational Research, 229(2):281–302, 2013.

Dongbin Zhao, Yujie Dai, and Zhen Zhang. Computational intelligence in urban traffic signal control: A survey. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 42(4):485–494, 2012.

Ammar Haydari and Yasin Yılmaz. Deep reinforcement learning for intelligent transportation systems: A survey. IEEE Transactions on Intelligent Transportation Systems, 23(1):11–32, 2022.

Tala Talaei Khoei, Hadjar Ould Slimane, and Naima Kaabouch. Deep learning: Systematic review, models, challenges, and research directions. Neural Computing and Applications, 35, 2023.

Carlos Gershenson. Self-organizing traffic lights, 2005.

Pravin Varaiya. The max-pressure controller for arbitrary networks of signalized intersections. In Satish V. Ukkusuri and Kaan Ozbay, editors, Advances in Dynamic Network Modeling in Complex Transportation Systems, pages 27–66. Springer New York, New York, NY, 2013.

Saptarshi Sengupta, Sanchita Basak, Pallabi Saikia, Sayak Paul, Vasilios Tsalavoutis, Frederick Atiah, Vadlamani Ravi, and Alan Peters. A review of deep learning with special emphasis on architectures, applications and recent trends. Knowledge-Based Systems, 194:105596, 2020.

Juntao Gao, Yulong Shen, Jia Liu, Minoru Ito, and Norio Shiratori. Adaptive traffic signal control: Deep reinforcement learning algorithm with experience replay and target network, 2017.

Li-Juan Liu, Hua Si, and Hamid Reza Karimi. Intelligent emergency traffic signal control system with pedestrian access. Information Sciences, 267:120805, 2024.

Ashish Tigga, Lopamudra Hota, Sanjeev Patel, and Arun Kumar. A deep Q-learning-based adaptive traffic light control system for urban safety. In 2022 4th International Conference on Advances in Computing, Communication Control and Networking (ICAC3N), pages 2430–2435, 2022.

Leilei Kang, Hao Huang, Weike Lu, and Lan Liu. A dueling deep Q-network method for low-carbon traffic signal control. Volume 141, page 110304, 2023.

Bin Wang, Zhengkun He, Jinfang Sheng, and Yu Chen. Deep reinforcement learning for traffic light timing optimization. Processes, 10(11), 2022.

Jianfeng Gu, Yong Fang, Zhichao Sheng, and Peng Wen. Double deep Q-network with a dual-agent for traffic signal control. Applied Sciences, 10(5), 2020.

Tarek Amine Haddad, Djalal Hedjazi, and Sofiane Aouag. A new deep reinforcement learning-based adaptive traffic light control approach for isolated intersection. In 2022 5th International Symposium on Informatics and its Applications (ISIA), pages 1–6, 2022.

Yanming Feng and Yongrong Wu. Environmental adaptive urban traffic signal control based on reinforcement learning algorithm. Journal of Physics: Conference Series, 1650(3):032097, October 2020.

Salah Bouktif, Abderraouf Cheniki, Ali Ouni, and Hesham El-Sayed. Deep reinforcement learning for traffic signal control with consistent state and reward design approach. Knowledge-Based Systems, 267:110440, 2023.

Peng Mei, Hamid Reza Karimi, Hehui Xie, Fei Chen, Cong Huang, and Shichun Yang. A deep reinforcement learning approach to energy management control with connected information for hybrid electric vehicles. Engineering Applications of Artificial Intelligence, 123:106239, 2023.

Alexandre Heuillet, Fabien Couthouis, and Natalia Díaz-Rodríguez. Explainability in deep reinforcement learning. Knowledge-Based Systems, 214:106685, 2021.

Michal Golan and Nahum Shimkin. Markov decision processes with burstiness constraints. European Journal of Operational Research, 312(3):877–889, 2024.

Degui Xiao, Xuefeng Yang, Jianfang Li, and Merabtene Islam. Attention deep neural network for lane marking detection. Knowledge-Based Systems, 194:105584, 2020.

Qian Zhou, Yang Lian, Jiayang Wu, Mengyue Zhu, Haiyong Wang, and Jinli Cao. An optimized Q-learning algorithm for mobile robot local path planning. Volume 286, page 111400, 2024.

Jian Chen and JianYing Wu. Dynamic adaptive streaming based on deep reinforcement learning. Journal of Physics: Conference Series, 1237(2):022124, June 2019.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518:529–533, 2015.

E. R. Maiorov, I. R. Ludan, J. D. Motta, and O. N. Saprykin. Developing a microscopic city model in SUMO simulation system. Journal of Physics: Conference Series, 1368(4):042081, November 2019.

Language:
English
Publication frequency:
4 issues per year
Journal subject areas:
Computer Science, Artificial Intelligence, Databases and Data Mining