Low-complexity algorithm for restless bandits with imperfect observations

Keqin Liu*, Richard Weber, Chengzhong Zhang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

We consider a class of restless bandit problems that has broad application in reinforcement learning and stochastic optimization. We consider N independent discrete-time Markov processes, each of which has two possible states: 1 and 0 (‘good’ and ‘bad’). Only if a process is both in state 1 and observed to be so does reward accrue. The aim is to maximize the expected discounted sum of returns over the infinite horizon, subject to the constraint that only M (< N) processes may be observed at each step. Observation is error-prone: there are known probabilities that state 1 (0) will be observed as 0 (1). From these one knows, at any time t, a probability that process i is in state 1. The resulting system may be modeled as a restless multi-armed bandit problem with an information state space of uncountable cardinality. Restless bandit problems are PSPACE-hard in general, even with finite state spaces. We propose a novel approach to simplifying the dynamic programming equations of this class of restless bandits and develop a low-complexity algorithm that achieves strong performance and is readily extensible to the general restless bandit model with observation errors. Under certain conditions, we establish the existence (indexability) of the Whittle index and its equivalence to our algorithm. When those conditions do not hold, we show by numerical experiments the near-optimal performance of our algorithm over the general parameter space. Furthermore, we theoretically prove the optimality of our algorithm for homogeneous systems.
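To make the model concrete, the sketch below simulates the belief (information-state) dynamics the abstract describes: each arm's probability of being in state 1 is updated by Bayes' rule after a noisy observation and then propagated through the Markov chain, while unobserved arms are propagated only. This is a minimal illustration of the model, not the paper's index algorithm; the parameter names (p11, p01 for transition probabilities, eps1, eps0 for the two observation-error probabilities) and the myopic baseline policy used to pick which M arms to observe are our own assumptions for illustration.

import numpy as np

def bayes_update(omega, y, eps1, eps0):
    # Posterior P(state = 1 | observation y) from prior belief omega.
    # eps1 = P(observe 0 | state 1), eps0 = P(observe 1 | state 0).
    like1 = 1.0 - eps1 if y == 1 else eps1   # P(y | state 1)
    like0 = eps0 if y == 1 else 1.0 - eps0   # P(y | state 0)
    return omega * like1 / (omega * like1 + (1.0 - omega) * like0)

def predict(omega, p11, p01):
    # One-step Markov propagation of the belief; applied to every arm,
    # observed or not. p11 = P(1 -> 1), p01 = P(0 -> 1).
    return omega * p11 + (1.0 - omega) * p01

def simulate_myopic(N=8, M=2, T=200, p11=0.8, p01=0.3,
                    eps1=0.1, eps0=0.05, beta=0.95, seed=0):
    rng = np.random.default_rng(seed)
    states = rng.random(N) < 0.5                    # hidden true states
    beliefs = np.full(N, p01 / (1.0 - p11 + p01))   # stationary prior
    total = 0.0
    for t in range(T):
        # Myopic baseline: observe the M arms with the highest
        # expected immediate reward omega * (1 - eps1).
        chosen = np.argsort(-beliefs * (1.0 - eps1))[:M]
        for i in chosen:
            # Noisy observation of the true state.
            if states[i]:
                y = int(rng.random() >= eps1)       # miss w.p. eps1
            else:
                y = int(rng.random() < eps0)        # false alarm w.p. eps0
            # Reward accrues only if the arm is in state 1 AND observed as 1.
            total += (beta ** t) * float(states[i] and y == 1)
            beliefs[i] = bayes_update(beliefs[i], y, eps1, eps0)
        # All beliefs propagate, and the hidden states transition.
        beliefs = predict(beliefs, p11, p01)
        states = rng.random(N) < np.where(states, p11, p01)
    return total

print(simulate_myopic())

Because the belief omega lives in [0, 1], the information state space is uncountable even though each underlying chain has only two states, which is what makes the dynamic program hard and motivates the paper's low-complexity index approach.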

Original language: English
Journal: Mathematical Methods of Operations Research
DOIs
Publication status: Published - 5 Sept 2024

Keywords

  • Continuous state space
  • Index policy
  • Observation errors
  • Restless bandits
