TY - JOUR
T1 - A reinforcement learning framework for autonomous cell activation and customized energy-efficient resource allocation in C-RANs
AU - Sun, Guolin
AU - Boateng, Gordon Owusu
AU - Huang, Hu
AU - Jiang, Wei
N1 - Publisher Copyright:
© 2019 KSII.
PY - 2019
Y1 - 2019
N2 - Cloud radio access networks (C-RANs) have been regarded in recent times as a promising concept for future 5G technologies, where all DSP processors are moved into a central baseband unit (BBU) pool in the cloud, and distributed remote radio heads (RRHs) compress and forward received radio signals from mobile users to the BBUs through radio links. In such a dynamic environment, automatic decision-making approaches, such as artificial-intelligence-based deep reinforcement learning (DRL), become imperative in designing new solutions. In this paper, we propose a generic framework of autonomous cell activation and customized physical resource allocation schemes for energy consumption and QoS optimization in wireless networks. We formulate the problem using two models, fractional power control with bandwidth adaptation and full power control with bandwidth allocation, and set up a Q-learning model to satisfy the QoS requirements of users and to achieve low energy consumption with the minimum number of active RRHs under varying traffic demands and network densities. Extensive simulations are conducted to show the effectiveness of our proposed solution compared with existing schemes.
AB - Cloud radio access networks (C-RANs) have been regarded in recent times as a promising concept for future 5G technologies, where all DSP processors are moved into a central baseband unit (BBU) pool in the cloud, and distributed remote radio heads (RRHs) compress and forward received radio signals from mobile users to the BBUs through radio links. In such a dynamic environment, automatic decision-making approaches, such as artificial-intelligence-based deep reinforcement learning (DRL), become imperative in designing new solutions. In this paper, we propose a generic framework of autonomous cell activation and customized physical resource allocation schemes for energy consumption and QoS optimization in wireless networks. We formulate the problem using two models, fractional power control with bandwidth adaptation and full power control with bandwidth allocation, and set up a Q-learning model to satisfy the QoS requirements of users and to achieve low energy consumption with the minimum number of active RRHs under varying traffic demands and network densities. Extensive simulations are conducted to show the effectiveness of our proposed solution compared with existing schemes.
KW - Autonomous cell activation
KW - Cloud radio access network
KW - Reinforcement learning
KW - Resource allocation
UR - http://www.scopus.com/inward/record.url?scp=85075040084&partnerID=8YFLogxK
U2 - 10.3837/tiis.2019.08.001
DO - 10.3837/tiis.2019.08.001
M3 - Article
AN - SCOPUS:85075040084
SN - 1976-7277
VL - 13
SP - 3821
EP - 3841
JO - KSII Transactions on Internet and Information Systems
JF - KSII Transactions on Internet and Information Systems
IS - 8
ER -