Open Access

Deep reinforcement learning based computing offloading in unmanned aerial vehicles for disaster management

The emergence of the Internet of Things, enabled by mobile computing, has applications in the field of unmanned aerial vehicle (UAV) development. Mobile edge computational offloading in UAVs is driven by low-latency applications such as disaster management, forest fire control and remote operations. Task completion efficiency is improved by means of an edge intelligence algorithm, and an optimal offloading policy is constructed through the application of deep reinforcement learning (DRL) in order to fulfill the target demand and to reduce transmission delay. The joint optimization minimizes the weighted sum of average energy consumption and execution delay. This edge intelligence algorithm, combined with the DRL network, exploits the computing operation to increase the probability that at least one of tracking and data transmission is usable. The proposed joint optimization performs significantly well in terms of execution delay, offloading cost and effective convergence over the prevailing methodologies proposed for UAV development. The proposed DRL enables the UAV to make real-time decisions based on the disaster scenario and the availability of computing resources.
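The offloading objective described above can be illustrated with a minimal sketch. The weights, cost values and the epsilon-greedy decision rule here are illustrative assumptions, not the paper's actual DRL architecture or parameters; the sketch only shows the shape of a weighted energy/delay cost and a per-task offloading choice.

```python
import random

# Hypothetical weights trading off energy against delay (not given in the abstract).
W_ENERGY = 0.5
W_DELAY = 0.5

def offload_cost(energy_j, delay_s, w_e=W_ENERGY, w_d=W_DELAY):
    """Weighted sum of energy consumption and execution delay --
    the quantity the joint optimization minimizes."""
    return w_e * energy_j + w_d * delay_s

def epsilon_greedy_offload(local_cost, edge_cost, epsilon=0.1):
    """Toy stand-in for the learned DRL policy: choose 'local' or 'edge'
    by lower weighted cost, exploring with probability epsilon."""
    if random.random() < epsilon:
        return random.choice(["local", "edge"])
    return "local" if local_cost <= edge_cost else "edge"

# Example: a task that is energy-heavy on board but cheap to transmit.
local = offload_cost(energy_j=8.0, delay_s=2.0)   # on-board UAV execution
edge = offload_cost(energy_j=3.0, delay_s=0.5)    # offload to an edge server
decision = epsilon_greedy_offload(local, edge, epsilon=0.0)  # → "edge"
```

In the actual method, a DRL agent would replace the greedy rule, learning the policy from state observations (task size, channel quality, battery level) rather than from precomputed costs.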

eISSN:
1339-309X
Language:
English
Publication frequency:
6 times per year
Journal subjects:
Engineering, Introductions and Overviews, other