A Multi-Agent Reinforcement Learning-Based Data-Driven Method for Home Energy Management

Xu Xu, Youwei Jia*, Yan Xu, Zhao Xu, Songjian Chai, Chun Sing Lai

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

266 Citations (Scopus)

Abstract

This paper proposes a novel reinforcement learning-based framework for home energy management (HEM) that enables efficient home-based demand response (DR). The hour-ahead energy consumption scheduling problem is formulated as a finite Markov decision process (FMDP) with discrete time steps. To solve it, a data-driven method combining a neural network (NN) with the Q-learning algorithm is developed, yielding cost-effective schedules for the HEM system. Specifically, real electricity price and solar photovoltaic (PV) generation data are processed by an extreme learning machine (ELM) within rolling time windows to predict these uncertain quantities. The scheduling decisions for household appliances and electric vehicles (EVs) are then obtained through the proposed framework, whose objective is twofold: to minimize the electricity bill and the DR-induced dissatisfaction. Simulations are performed at the level of a residential house with multiple home appliances, an EV, and several PV panels. The test results demonstrate the effectiveness of the proposed data-driven HEM framework.
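To make the abstract's formulation concrete, the following is a minimal, illustrative Python sketch of an hour-ahead scheduling problem cast as a finite MDP and solved with tabular Q-learning. The 24-hour horizon, discretized price bins, single shiftable appliance, appliance energy use, and dissatisfaction penalty are all assumptions made for illustration; the paper itself uses a neural-network-assisted Q-learning method driven by ELM forecasts and multiple agents, which this sketch does not reproduce.

```python
import numpy as np

# Illustrative sketch only: hour-ahead appliance scheduling as a finite MDP
# solved with tabular Q-learning. All parameters below are assumptions,
# not values from the paper.

rng = np.random.default_rng(0)

HOURS = 24                 # finite horizon: one scheduling day
PRICE_LEVELS = 5           # discretized forecast-price bins
ACTIONS = 2                # 0 = defer appliance, 1 = run appliance
APPLIANCE_KWH = 1.5        # assumed energy per operating hour
DISSATISFACTION = 0.8      # assumed penalty per deferred hour

# Q-table indexed by (hour, price bin, action)
Q = np.zeros((HOURS, PRICE_LEVELS, ACTIONS))
alpha, gamma, epsilon = 0.1, 0.95, 0.1

def price_bin(price, p_min=0.05, p_max=0.30):
    """Map a forecast price ($/kWh) to a discrete state bin."""
    frac = np.clip((price - p_min) / (p_max - p_min), 0.0, 0.999)
    return int(frac * PRICE_LEVELS)

def simulate_day(prices, train=True):
    """Run one 24-hour episode and optionally update the Q-table."""
    total_cost = 0.0
    for h in range(HOURS):
        s = price_bin(prices[h])
        if train and rng.random() < epsilon:
            a = int(rng.integers(ACTIONS))      # explore
        else:
            a = int(np.argmax(Q[h, s]))         # exploit

        # Reward: negative electricity cost when running,
        # negative dissatisfaction when deferring.
        reward = -prices[h] * APPLIANCE_KWH if a == 1 else -DISSATISFACTION
        total_cost -= reward

        if train:
            if h < HOURS - 1:
                s_next = price_bin(prices[h + 1])
                target = reward + gamma * Q[h + 1, s_next].max()
            else:
                target = reward                  # terminal step of the finite MDP
            Q[h, s, a] += alpha * (target - Q[h, s, a])
    return total_cost

# Train on synthetic price curves standing in for the rolling ELM forecasts.
for episode in range(2000):
    prices = 0.10 + 0.10 * np.sin(np.linspace(0, np.pi, HOURS)) + 0.02 * rng.random(HOURS)
    simulate_day(prices, train=True)

print("Example scheduled cost:", round(simulate_day(prices, train=False), 3))
```

In this toy setup the state is just the hour and a binned price forecast; the paper's framework instead uses NN function approximation over richer states (appliance status, EV charging, PV output) and a dissatisfaction model per appliance, so the sketch should be read only as a structural analogy.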

Original language: English
Article number: 8981876
Pages (from-to): 3201-3211
Number of pages: 11
Journal: IEEE Transactions on Smart Grid
Volume: 11
Issue number: 4
DOIs
Publication status: Published - Jul 2020
Externally published: Yes

Keywords

  • Q-learning algorithm
  • Reinforcement learning
  • data-driven method
  • demand response
  • finite Markov decision process
  • home energy management
  • neural network

