Open Access

Application of Deep Neural Networks in Multi-Hop Wireless Sensor Network (WSN) Channel Optimization

  
Apr 11, 2025


Figure 1.

Proposed deep learning-based channel and routing optimization framework. Feature extraction captures spatial-temporal variations, reinforcement learning optimizes policy decisions, and final routing and channel selection are dynamically adjusted.

Figure 2.

Reinforcement learning-based policy optimization process. The RL agent iteratively updates policies based on observed network states and reward feedback.
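The iterative policy update described in Figure 2 can be illustrated with a minimal tabular Q-learning loop. This is a hedged sketch, not the paper's actual model: the states, actions, transition dynamics, and reward function below are toy stand-ins chosen only to show the observe-act-reward-update cycle.

```python
import random

def q_learning(n_states=4, n_actions=3, episodes=500,
               alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Toy Q-learning loop mirroring Figure 2's update cycle.

    All quantities here are hypothetical stand-ins for the paper's
    network states (e.g. link quality, queue length) and actions
    (channel/route choices).
    """
    rng = random.Random(seed)
    # Q-table: expected discounted reward for each (state, action) pair.
    Q = [[0.0] * n_actions for _ in range(n_states)]

    def reward(state, action):
        # Toy reward: the "right" channel for a given interference
        # pattern yields throughput reward 1, otherwise 0.
        return 1.0 if action == state % n_actions else 0.0

    state = 0
    for _ in range(episodes):
        # Epsilon-greedy action selection (explore vs. exploit).
        if rng.random() < epsilon:
            action = rng.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda a: Q[state][a])
        r = reward(state, action)
        next_state = rng.randrange(n_states)  # toy random transition
        # Policy update from the observed state and reward feedback.
        best_next = max(Q[next_state])
        Q[state][action] += alpha * (r + gamma * best_next - Q[state][action])
        state = next_state
    return Q
```

In the deployed framework the Q-table would be replaced by the deep network of Figure 1, which generalizes across the continuous state space; the update rule itself is unchanged in spirit.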

Figure 3.

Comparison of network throughput for different methods. The proposed DNN-RL model maintains higher throughput under increasing network loads.

Figure 4.

Energy depletion over time for different optimization methods. The proposed DNN-RL approach maintains a more gradual depletion rate, ensuring extended network lifetime.

Figure 5.

Packet delivery ratio (PDR) under varying interference levels. The proposed method maintains a higher PDR, demonstrating robustness against interference.

Comparison of Energy Efficiency

Method                  Energy per Packet (mJ)   Network Lifetime (hours)
DNN-RL (Proposed)       0.12                     145.3
Q-RL (Baseline)         0.15                     132.1
Heuristic Method (HM)   0.21                     98.7
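The relative gains implied by the table can be made explicit with a short calculation. The figures below are taken directly from the table; the function name and structure are illustrative only.

```python
# Values copied from the "Comparison of Energy Efficiency" table:
# (energy per packet in mJ, network lifetime in hours).
table = {
    "DNN-RL (Proposed)": (0.12, 145.3),
    "Q-RL (Baseline)": (0.15, 132.1),
    "Heuristic Method (HM)": (0.21, 98.7),
}

def improvement(proposed, baseline):
    """Percent energy saving per packet and percent lifetime gain
    of `proposed` over `baseline`."""
    e_p, l_p = table[proposed]
    e_b, l_b = table[baseline]
    energy_saving = (e_b - e_p) / e_b * 100   # % less energy per packet
    lifetime_gain = (l_p / l_b - 1) * 100     # % longer network lifetime
    return round(energy_saving, 1), round(lifetime_gain, 1)
```

For example, against the heuristic method the proposed DNN-RL uses about 42.9% less energy per packet and extends network lifetime by about 47.2%; against the Q-RL baseline the margins are roughly 20% and 10%, respectively.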