TY - JOUR
T1 - Extended UCB Policies for Multi-armed Bandit Problems
AU - Liu, Keqin
AU - Zheng, Tianshuo
AU - Zhou, Zhi-Hua
N1 - Publisher Copyright:
© The Author(s), under exclusive licence to Springer Science+Business Media LLC, part of Springer Nature 2025.
PY - 2026/1
Y1 - 2026/1
N2 - Multi-armed bandit (MAB) problems are widely studied in operations research, stochastic optimization, and reinforcement learning. In this paper, we consider the classical MAB model with heavy-tailed reward distributions and introduce the extended robust UCB policy, which extends the results of Bubeck et al. (IEEE Trans Inf Theory 59:7711–7717, 2013) and Lattimore (Adv Neural Inf Process Syst 30:1–10, 2017), both building on the pioneering idea of UCB policies (e.g., Auer et al. in Mach Learn 47:235–256, 2002). Previous UCB policies require strict conditions on the reward distributions, which can be difficult to guarantee in practical scenarios. Our extended robust UCB generalizes Lattimore’s seminal work to arbitrarily chosen moment orders, as long as the two moments have a known controlled relationship, while still achieving the optimal regret growth order, thus broadening the application area of UCB policies for heavy-tailed reward distributions. Furthermore, we achieve a near-optimal regret order without any knowledge of the reward distributions, provided that their p-th moments exist for some order p. Finally, we briefly present our earlier work on light-tailed reward distributions for a complete illustration of the amazing simplicity and power of UCB policies.
AB - Multi-armed bandit (MAB) problems are widely studied in operations research, stochastic optimization, and reinforcement learning. In this paper, we consider the classical MAB model with heavy-tailed reward distributions and introduce the extended robust UCB policy, which extends the results of Bubeck et al. (IEEE Trans Inf Theory 59:7711–7717, 2013) and Lattimore (Adv Neural Inf Process Syst 30:1–10, 2017), both building on the pioneering idea of UCB policies (e.g., Auer et al. in Mach Learn 47:235–256, 2002). Previous UCB policies require strict conditions on the reward distributions, which can be difficult to guarantee in practical scenarios. Our extended robust UCB generalizes Lattimore’s seminal work to arbitrarily chosen moment orders, as long as the two moments have a known controlled relationship, while still achieving the optimal regret growth order, thus broadening the application area of UCB policies for heavy-tailed reward distributions. Furthermore, we achieve a near-optimal regret order without any knowledge of the reward distributions, provided that their p-th moments exist for some order p. Finally, we briefly present our earlier work on light-tailed reward distributions for a complete illustration of the amazing simplicity and power of UCB policies.
KW - heavy-tailed bandit
KW - moment information
KW - order-optimal regret
KW - upper confidence bound
UR - https://www.scopus.com/pages/publications/105026239014
U2 - 10.1007/s10994-025-06943-6
DO - 10.1007/s10994-025-06943-6
M3 - Article
AN - SCOPUS:105026239014
SN - 0885-6125
VL - 115
JO - Machine Learning
JF - Machine Learning
IS - 1
M1 - 7
ER -