Abstract
Portfolio optimization faces challenges from non-stationary market dynamics and the limitations of existing methods in handling complex data relationships and preserving prior knowledge. Standard Deep Reinforcement Learning (DRL) approaches often rely on static rewards, struggle to integrate multiple view types beyond expected returns, and discard historical information suboptimally. This paper introduces the Deep Multi-View Factor Entropy Pooling (DMVFEP) framework to address these issues. DMVFEP utilizes a novel multi-view architecture with specialized neural networks, including a Transformer for returns, an asymmetric MLP with attention for volatility, and an MLP for factors, integrated into a TD3 DRL agent. Crucially, the agent learns to generate dynamic, state-dependent market views rather than portfolio weights directly. These views are then systematically integrated with prior beliefs using Factor Entropy Pooling (FEP), a principled method that minimizes information loss. Empirical evaluation on assets derived from the Dow Jones Industrial Average demonstrates DMVFEP's substantial outperformance, yielding significantly higher absolute and risk-adjusted returns than benchmark strategies and leading DRL models and validating its ability to capture market complexities and adapt dynamically. Ablation studies confirm the significant contribution of each view component.
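As context for the view-integration step described above, the sketch below illustrates generic entropy pooling, the relative-entropy-minimizing update that FEP builds on: a posterior scenario distribution is chosen as close as possible (in Kullback-Leibler divergence) to the prior while satisfying linear view constraints. The function name `entropy_pooling`, the solver choice, and the dual formulation shown are illustrative assumptions, not the paper's FEP implementation, which uses a factor-structured variant.

```python
import numpy as np
from scipy.optimize import minimize


def entropy_pooling(prior, A, b):
    """Illustrative entropy-pooling step (assumed, generic formulation).

    Finds posterior scenario probabilities q minimizing KL(q || prior)
    subject to linear view constraints A @ q = b and sum(q) = 1.
    """
    J = len(prior)
    # Stack the normalisation constraint on top of the view constraints.
    A_full = np.vstack([np.ones((1, J)), np.atleast_2d(A)])
    b_full = np.concatenate([[1.0], np.atleast_1d(b)])

    # Dual objective: at the optimum q_j = prior_j * exp((A^T lam)_j - 1),
    # so we minimise sum_j q_j(lam) - lam @ b over the multipliers lam.
    def dual(lam):
        q = prior * np.exp(A_full.T @ lam - 1.0)
        return q.sum() - lam @ b_full

    res = minimize(dual, np.zeros(A_full.shape[0]), method="BFGS")
    q = prior * np.exp(A_full.T @ res.x - 1.0)
    return q / q.sum()
```

For instance, a single view that the probability-weighted mean of a return scenario vector should equal a target value would set `A` to that scenario row and `b` to the target; the posterior then tilts the prior just enough to honor the view.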
Original language | English
---|---
Title of host publication | 2025 21st International Conference on Intelligent Computing
Publication status | Published - 2025