TY - JOUR
T1 - Revised reinforcement learning based on anchor graph hashing for autonomous cell activation in cloud-RANs
AU - Sun, Guolin
AU - Zhan, Tong
AU - Boateng, Gordon Owusu
AU - Ayepah Mensah, Daniel
AU - Liu, Guisong
AU - Jiang, Wei
N1 - Publisher Copyright:
© 2019 Elsevier B.V.
PY - 2020/3
Y1 - 2020/3
N2 - Cloud radio access networks (C-RANs) have been regarded in recent times as a promising concept in future 5G technologies, where all DSP processors are moved into a central baseband unit (BBU) pool in the cloud, and distributed remote radio heads (RRHs) compress and forward received radio signals from mobile users to the BBUs through radio links. In such a dynamic environment, automatic decision-making approaches, such as artificial-intelligence-based deep reinforcement learning (DRL), become imperative in designing new solutions. In this paper, we propose a generic framework of autonomous cell activation and customized physical resource allocation schemes to balance energy consumption and QoS satisfaction in wireless networks. We formulate the cell activation problem as a Markov decision process and set up a revised reinforcement learning model based on K-means clustering and anchor-graph hashing to satisfy the QoS requirements of users and to achieve low energy consumption with the minimum number of active RRHs under varying traffic demand and user mobility. Extensive simulations are conducted to show the effectiveness of our proposed solution compared with existing schemes.
AB - Cloud radio access networks (C-RANs) have been regarded in recent times as a promising concept in future 5G technologies, where all DSP processors are moved into a central baseband unit (BBU) pool in the cloud, and distributed remote radio heads (RRHs) compress and forward received radio signals from mobile users to the BBUs through radio links. In such a dynamic environment, automatic decision-making approaches, such as artificial-intelligence-based deep reinforcement learning (DRL), become imperative in designing new solutions. In this paper, we propose a generic framework of autonomous cell activation and customized physical resource allocation schemes to balance energy consumption and QoS satisfaction in wireless networks. We formulate the cell activation problem as a Markov decision process and set up a revised reinforcement learning model based on K-means clustering and anchor-graph hashing to satisfy the QoS requirements of users and to achieve low energy consumption with the minimum number of active RRHs under varying traffic demand and user mobility. Extensive simulations are conducted to show the effectiveness of our proposed solution compared with existing schemes.
KW - Anchor graph hashing
KW - Autonomous cell activation
KW - Cloud radio access networks
KW - K-means clustering
KW - Reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=85072968484&partnerID=8YFLogxK
U2 - 10.1016/j.future.2019.09.044
DO - 10.1016/j.future.2019.09.044
M3 - Article
AN - SCOPUS:85072968484
SN - 0167-739X
VL - 104
SP - 60
EP - 73
JO - Future Generation Computer Systems
JF - Future Generation Computer Systems
ER -