Combining Transformer based Deep Reinforcement Learning with Black-Litterman Model for Portfolio Optimization

Research output: Contribution to journal › Article › peer-review

Abstract

As a model-free algorithm, a deep reinforcement learning (DRL) agent learns and makes decisions by interacting with the environment in an unsupervised manner. In recent years, DRL algorithms have been widely applied to portfolio optimization over consecutive trading periods, since a DRL agent can dynamically adapt to market changes and does not rely on a specification of the joint dynamics across the assets. However, typical DRL agents for portfolio optimization cannot learn a policy that is aware of the dynamic correlation between portfolio asset returns. Since the dynamic correlations among portfolio assets are crucial to portfolio optimization, the lack of such knowledge makes it difficult for the DRL agent to maximize the return per unit of risk, especially when the target market permits short selling (e.g., the US stock market). In this research, we propose a hybrid portfolio optimization model combining a DRL agent with the Black-Litterman (BL) model, enabling the agent to learn the dynamic correlation between portfolio asset returns and to implement an effective long/short strategy based on that correlation. Essentially, the DRL agent is trained to learn a policy for applying the BL model to determine the target portfolio weights. In this model, we formulate a specific objective function based on the environment's reward function, which accounts for the return, risk, and transaction scale of the portfolio. The DRL agent is trained by propagating the objective function's gradient to its policy function. To test the agent, we construct the portfolio from all constituent stocks of the Dow Jones Industrial Average. Empirical results of experiments conducted on real-world United States stock market data demonstrate that our DRL agent outperforms various comparative portfolio choice strategies and alternative DRL frameworks by at least 42% in terms of cumulative return. In terms of return per unit of risk, it also significantly outperforms these comparative strategies as well as alternatives based on other machine learning frameworks.
Original language: English
Pages (from-to): 20111-20146
Number of pages: 36
Journal: Neural Computing and Applications
Volume: 36
Issue number: 32
DOIs
Publication status: Accepted/In press - May 2024

Keywords

  • Black-Litterman model
  • Deep reinforcement learning
  • Portfolio optimization
  • Transformer neural network
