TY - GEN
T1 - Multi-agent Reinforcement Learning Based Collaborative Multi-task Scheduling for Vehicular Edge Computing
AU - Li, Peisong
AU - Xiao, Ziren
AU - Wang, Xinheng
AU - Huang, Kaizhu
AU - Huang, Yi
AU - Tchernykh, Andrei
N1 - Publisher Copyright:
© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2024.
PY - 2024
Y1 - 2024
N2 - Connected vehicles equipped with advanced computing and communication capabilities are increasingly viewed as mobile computing platforms capable of offering various in-vehicle services, including but not limited to autonomous driving, collision avoidance, and parking assistance. However, providing these time-sensitive services requires fusing multi-task processing results from multiple sensors in connected vehicles, which poses a significant challenge to designing an effective task scheduling strategy that minimizes service requests’ completion time and reduces vehicles’ energy consumption. In this paper, a multi-agent reinforcement learning-based collaborative multi-task scheduling method is proposed to jointly optimize completion time and energy consumption. First, the reinforcement learning-based scheduling method allocates multiple tasks dynamically according to the dynamically changing environment. Then, a cloud-edge-end collaboration scheme is designed to complete the tasks efficiently. Furthermore, the transmission power can be adjusted based on the position and mobility of vehicles to reduce energy consumption. The experimental results demonstrate that the designed task scheduling method outperforms benchmark methods in terms of overall performance.
AB - Connected vehicles equipped with advanced computing and communication capabilities are increasingly viewed as mobile computing platforms capable of offering various in-vehicle services, including but not limited to autonomous driving, collision avoidance, and parking assistance. However, providing these time-sensitive services requires fusing multi-task processing results from multiple sensors in connected vehicles, which poses a significant challenge to designing an effective task scheduling strategy that minimizes service requests’ completion time and reduces vehicles’ energy consumption. In this paper, a multi-agent reinforcement learning-based collaborative multi-task scheduling method is proposed to jointly optimize completion time and energy consumption. First, the reinforcement learning-based scheduling method allocates multiple tasks dynamically according to the dynamically changing environment. Then, a cloud-edge-end collaboration scheme is designed to complete the tasks efficiently. Furthermore, the transmission power can be adjusted based on the position and mobility of vehicles to reduce energy consumption. The experimental results demonstrate that the designed task scheduling method outperforms benchmark methods in terms of overall performance.
KW - Cloud-edge-end collaboration
KW - Multi-agent reinforcement learning
KW - Multi-task scheduling
KW - Vehicular edge computing
UR - http://www.scopus.com/inward/record.url?scp=85187774882&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-54531-3_1
DO - 10.1007/978-3-031-54531-3_1
M3 - Conference Proceeding
AN - SCOPUS:85187774882
SN - 9783031545306
T3 - Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST
SP - 3
EP - 22
BT - Collaborative Computing
A2 - Gao, Honghao
A2 - Wang, Xinheng
A2 - Voros, Nikolaos
PB - Springer Science and Business Media Deutschland GmbH
T2 - 19th EAI International Conference on Collaborative Computing: Networking, Applications and Worksharing, CollaborateCom 2023
Y2 - 4 October 2023 through 6 October 2023
ER -