This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Martinez-Piazuelo, J., Ochoa, D. E., Quijano, N., & Giraldo, L. F. (2020). A Multi-Critic Reinforcement Learning Method: An Application to Multi-Tank Water Systems. IEEE Access, 8, 173227–173238.
Abdul-Adheem, W. R., & Ibraheem, I. K. (2019). Decoupled control scheme for output tracking of a general industrial nonlinear MIMO system using improved active disturbance rejection scheme. Alexandria Engineering Journal, 58(4), 1145–1156.
Xu, J., Li, H., & Zhang, Q. (2023). Multivariable Coupled System Control Method Based on Deep Reinforcement Learning. Sensors, 23(21), 8679.
Ye, L., & Jiang, P. (2023). Optimization control of the double-capacity water tank-level system using the deep deterministic policy gradient algorithm. Engineering Reports, 5(11), e12668.
Almeida, A. M. de, Lenzi, M. K., & Lenzi, E. K. (2020). A Survey of Fractional Order Calculus Applications of Multiple-Input, Multiple-Output (MIMO) Process Control. Fractal and Fractional, 4(2), 22.
Radac, M.-B., Precup, R.-E., & Roman, R.-C. (2018). Data-driven model reference control of MIMO vertical tank systems with model-free VRFT and Q-Learning. ISA Transactions, 73, 227–238.
Hajare, V. D., Patre, B. M., & Khandekar, A. A. (2017). Decentralized PID controller design for TITO processes with experimental validation. International Journal of Dynamics and Control, 5, 583–595.
Silver, D., Lever, G., Heess, N., Degris, T., Wierstra, D., & Riedmiller, M. (2014). Deterministic Policy Gradient Algorithms. International Conference on Machine Learning (ICML).
Reddy, M. D. L., Padhy, P. K., & Ahmad Ansari, I. (2019). Auto-tuning Method for decentralized PID controller of TITO systems using firefly algorithm. 2019 International Conference on Intelligent Computing and Control Systems (ICCS), Madurai, India, 683–688.
Khalid, J., Anwari, M., Khan, M., & Hidayat, T. (2022). Efficient Load Frequency Control of Renewable Integrated Power System: A Twin Delayed DDPG-Based Deep Reinforcement Learning Approach. IEEE Access, 10.
Ould Mohamed, M. V. (2021). Design of Decoupled PI Controllers for Two-Input Two-Output Networked Control Systems with Intrinsic and Network-Induced Time Delays. Acta Mechanica et Automatica, 15, 201–208.
Pawar, R. N., & Jadhav, S. P. (2017). Design of NDT and PSO based Decentralized PID Controller for Wood-Berry Distillation Column. IEEE International Conference on Power, Control, Signals and Instrumentation Engineering (ICPCSI-2017).
Wood, R. K., & Berry, M. W. (1973). Terminal composition control of a binary distillation column. Chemical Engineering Science, 28(9), 1707–1717.
Tavakoli, S., Griffin, I., & Fleming, P. J. (2006). Tuning of decentralized PI (PID) controllers for TITO processes. Control Engineering Practice, 14(9), 1069–1080.
Qiu, C., Hu, Y., Chen, Y., & Zeng, B. (2019). Deep deterministic policy gradient (DDPG)-based energy harvesting wireless communications. IEEE Internet of Things Journal, 6(5), 8577–8588.
Yan, Z., & Xu, Y. (2020). A Multi-Agent Deep Reinforcement Learning Method for Cooperative Load Frequency Control of a Multi-Area Power System. IEEE Transactions on Power Systems, 35, 4599–4608.
Skiparev, V., Belikov, J., Petlenkov, E., & Levron, Y. (2022). Reinforcement Learning based MIMO Controller for Virtual Inertia Control in Isolated Microgrids. 2022 IEEE PES Innovative Smart Grid Technologies Conference Europe (ISGT-Europe), 1–5.
Du, Y., Zandi, H., Kotevska, O., Kurte, K., Munk, J., Amasyali, K., Mckee, E., & Li, F. (2021). Intelligent multi-zone residential HVAC control strategy based on deep reinforcement learning. Applied Energy, 281, 116117.
Ho, T., Tat, T., Ngo, H., Nguyen, T., Bui, D., Le, T., Le, V., & Huynh, L. (2023). Applying DDPG Algorithm to Swing-Up and Balance Control for a Double Inverted Pendulum on a Cart. Robotica & Management.
Wang, Q.-G., Zou, B., Lee, T. H., & Bi, Q. (1997). Auto-tuning of multivariable PID controllers from decentralized relay feedback. Automatica, 33(3), 319–330.
Alharin, A., Doan, T.-N., & Sartipi, M. (2020). Reinforcement Learning Interpretation Methods: A Survey. IEEE Access, 8, 171058–171077.
Luu, T. M., & Yoo, C. D. (2021). Hindsight Goal Ranking on Replay Buffer for Sparse Reward Environment. IEEE Access, 9, 51996–52007.
Kiran, M., & Ozyildirim, M. (2022). Hyperparameter Tuning for Deep Reinforcement Learning Applications. arXiv:2201.11182.
Felten, F., Gareev, D., Talbi, E. G., & Danoy, G. (2023). Hyperparameter Optimization for Multi-Objective Reinforcement Learning. arXiv:2310.16487.