Energy Optimization of Multitask DNN Inference in MEC-Assisted XR Devices: A Lyapunov-Guided Reinforcement Learning Approach

Yanzan Sun, Jiacheng Qiu, Guangjin Pan*, Shugong Xu, Shunqing Zhang, Xiaoyun Wang, Shuangfeng Han

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Extended reality (XR), blending virtual and real worlds, is a key application of future networks. While AI advancements enhance XR capabilities, they also impose significant computational and energy challenges on lightweight XR devices. In this article, we develop a distributed queue model for multitask deep neural network (DNN) inference that addresses resource competition and queue coupling. To cope with the high energy consumption and limited resources of XR devices, we design a dual time-scale joint optimization strategy for model partitioning and resource allocation, formulated as a bi-level optimization problem. This strategy minimizes the total energy consumption of XR devices while ensuring queue stability and respecting computational and communication resource constraints. To solve this problem, we devise a Lyapunov-guided proximal policy optimization algorithm, named LyaPPO. Numerical results show that LyaPPO outperforms the baseline algorithms: across different maximum local computational capacities, it reduces energy consumption by 24.29%–56.62% relative to the suboptimal baselines.
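The abstract's "queue stability while minimizing energy" objective is characteristic of Lyapunov drift-plus-penalty control. The following is a minimal illustrative sketch of that generic technique, not the paper's LyaPPO algorithm: the queue dynamics, the control parameter `V`, and the toy (service rate, energy cost) options are all assumptions made for illustration.

```python
# Illustrative Lyapunov drift-plus-penalty control for a single task queue.
# NOT the paper's LyaPPO; all numbers and option sets below are hypothetical.

def queue_update(q, arrival, service):
    """Standard Lyapunov queue dynamics: Q(t+1) = max(Q(t) - b(t), 0) + a(t)."""
    return max(q - service, 0.0) + arrival

def drift_plus_penalty_choice(q, options, V):
    """Per-slot greedy rule: pick the (service, energy) option minimizing
    V * energy - Q(t) * service. Larger V trades queue backlog for energy."""
    return min(options, key=lambda opt: V * opt[1] - q * opt[0])

# Toy simulation: constant arrivals, three hypothetical compute options.
arrivals = [3.0] * 50
options = [(0.0, 0.0), (2.0, 1.0), (4.0, 3.0)]  # (service rate, energy cost)
q, total_energy = 0.0, 0.0
for a in arrivals:
    service, energy = drift_plus_penalty_choice(q, options, V=2.0)
    total_energy += energy
    q = queue_update(q, a, service)
print(round(q, 1), round(total_energy, 1))  # backlog stays bounded
```

With these toy numbers the controller idles while the queue is empty, then settles on the fastest option once backlog builds, keeping the queue bounded at the cost of energy; tuning `V` shifts that trade-off, which is the intuition behind the paper's bi-level design.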

Original language: English
Pages (from-to): 17499-17513
Number of pages: 15
Journal: IEEE Internet of Things Journal
Volume: 12
Issue number: 11
DOIs
Publication status: Published - 2025
Externally published: Yes

Keywords

  • Collaborative inference
  • deep neural network (DNN) partitioning
  • deep reinforcement learning
  • edge intelligence
  • energy efficiency
  • resource allocation

