Reinforcement Learning-Based Opportunistic Routing Protocol for Underwater Acoustic Sensor Networks

Ying Zhang*, Zheming Zhang, Lei Chen, Xinheng Wang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

56 Citations (Scopus)

Abstract

Due to the high bit error rate, long delay, low bandwidth, and limited sensor-node energy of underwater acoustic sensor networks (UASNs), it is particularly important to design a routing protocol with high reliability, strong robustness, low end-to-end delay, and high energy efficiency that can be flexibly employed in dynamic network environments. Therefore, a reinforcement learning-based opportunistic routing protocol (RLOR) is proposed in this paper by combining the advantages of opportunistic routing and reinforcement learning. RLOR is a distributed routing approach that comprehensively considers nodes' peripheral status to select appropriate relay nodes. Additionally, a recovery mechanism is employed in RLOR to enable packets to bypass void areas efficiently and continue forwarding, which improves the data delivery rate in sparse networks. Simulation results show that, compared with other representative underwater routing protocols, the proposed RLOR performs well in end-to-end delay, reliability, energy efficiency, and other aspects in dynamic underwater network environments.
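The abstract's core idea, distributed relay selection driven by learned per-neighbour values, with a fallback when no forwarding neighbour exists, can be sketched as follows. This is a minimal illustration only, not the paper's actual algorithm: the class name, reward shaping, and Q-learning parameters are all assumptions for the sake of the example.

```python
import random

class UnderwaterNode:
    """Hypothetical sketch of RL-based next-hop selection in the spirit of
    RLOR: each node keeps a Q-value per neighbour and forwards to the
    neighbour with the highest value, exploring occasionally."""

    def __init__(self, node_id, depth):
        self.node_id = node_id
        self.depth = depth   # shallower nodes are assumed closer to the sink
        self.q = {}          # neighbour id -> learned Q-value

    def select_relay(self, neighbours, epsilon=0.1):
        """Pick a relay: mostly greedy on Q, sometimes explore.
        Returning None signals a routing void, where a recovery
        mechanism (as in the paper) would take over."""
        if not neighbours:
            return None
        if random.random() < epsilon:
            return random.choice(neighbours)
        return max(neighbours, key=lambda n: self.q.get(n.node_id, 0.0))

    def update_q(self, relay, reward, alpha=0.5, gamma=0.9):
        """Standard Q-learning update using the relay's best known Q-value
        as the estimate of future return."""
        best_next = max(relay.q.values(), default=0.0)
        old = self.q.get(relay.node_id, 0.0)
        self.q[relay.node_id] = old + alpha * (reward + gamma * best_next - old)
```

A node would call `update_q` after each (acknowledged or failed) transmission, so relays that reliably advance packets toward the sink accumulate higher values over time.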

Original language: English
Article number: 9351791
Pages (from-to): 2756-2770
Number of pages: 15
Journal: IEEE Transactions on Vehicular Technology
Volume: 70
Issue number: 3
DOIs
Publication status: Published - Mar 2021

Keywords

  • UASNs
  • opportunistic routing
  • reinforcement learning
  • reliability
  • routing void
