A reinforcement learning framework for autonomous cell activation and customized energy-efficient resource allocation in C-RANs

Guolin Sun*, Gordon Owusu Boateng, Hu Huang, Wei Jiang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Cloud radio access networks (C-RANs) have recently been regarded as a promising concept for future 5G technologies, in which all digital signal processing (DSP) processors are moved into a centralized baseband unit (BBU) pool in the cloud, while distributed remote radio heads (RRHs) compress and forward radio signals received from mobile users to the BBUs over radio links. In such a dynamic environment, automatic decision-making approaches, such as artificial-intelligence-based deep reinforcement learning (DRL), become imperative in designing new solutions. In this paper, we propose a generic framework of autonomous cell activation and customized physical resource allocation schemes for energy consumption and QoS optimization in wireless networks. We formulate the problem as two models, fractional power control with bandwidth adaptation and full power control with bandwidth allocation, and set up a Q-learning model to satisfy the QoS requirements of users and to achieve low energy consumption with the minimum number of active RRHs under varying traffic demand and network density. Extensive simulations show the effectiveness of our proposed solution compared to existing schemes.
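The abstract does not give the paper's state, action, or reward definitions, but the core idea of a Q-learning agent that activates the minimum number of RRHs while meeting QoS can be sketched in miniature. In the toy model below, states discretize traffic demand, actions choose how many RRHs to keep active, and the reward trades a hypothetical QoS-violation penalty against per-RRH energy cost; all structure and numbers are illustrative assumptions, not the paper's actual formulation.

```python
import random

# Illustrative sketch (not the paper's model): tabular Q-learning for
# autonomous RRH activation. States = discretized traffic-load levels,
# actions = number of active RRHs (1..N_RRH).
N_LOAD_LEVELS = 5          # traffic-demand states (demand = state + 1 units)
N_RRH = 4                  # action a means a+1 RRHs are active
ALPHA, GAMMA, EPS = 0.02, 0.9, 0.1

Q = [[0.0] * N_RRH for _ in range(N_LOAD_LEVELS)]

def reward(load, action):
    """Hypothetical reward: penalize unmet demand and per-RRH energy."""
    demand = load + 1
    capacity = action + 1              # assume each active RRH serves 1 unit
    qos_penalty = max(0, demand - capacity) * 5.0
    energy_cost = (action + 1) * 2.0
    return -(qos_penalty + energy_cost)

def choose_action(state):
    """Epsilon-greedy action selection."""
    if random.random() < EPS:
        return random.randrange(N_RRH)
    return max(range(N_RRH), key=lambda a: Q[state][a])

def train(episodes=50000):
    for _ in range(episodes):
        s = random.randrange(N_LOAD_LEVELS)          # current traffic level
        a = choose_action(s)
        r = reward(s, a)
        s_next = random.randrange(N_LOAD_LEVELS)     # traffic varies randomly
        # Standard Q-learning update.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s_next]) - Q[s][a])

random.seed(0)
train()
# Learned policy: active-RRH choice per load level; it should grow with load.
best = [max(range(N_RRH), key=lambda a: Q[s][a]) for s in range(N_LOAD_LEVELS)]
print(best)
```

With these assumed costs, the greedy policy activates just enough RRHs to cover demand, which mirrors the paper's stated goal of meeting QoS with the minimum number of active RRHs; the paper's full scheme additionally handles power control and bandwidth allocation, which this sketch omits.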

Original language: English
Pages (from-to): 3821-3841
Number of pages: 21
Journal: KSII Transactions on Internet and Information Systems
Volume: 13
Issue number: 8
DOIs
Publication status: Published - 2019
Externally published: Yes

Keywords

  • Autonomous cell activation
  • Cloud radio access network
  • Reinforcement learning
  • Resource allocation

