Working paper

Regret bound for Narendra-Shapiro bandit algorithms

Sébastien Gadat, Fabien Panloup et Sofiane Saadane

Abstract

Narendra-Shapiro (NS) algorithms are bandit-type algorithms introduced in the 1960s with a view to applications in psychology and clinical trials. The long-time behavior of such algorithms has been studied in depth, but few results seem to exist in a non-asymptotic setting, which is often of primary interest for applications. In this paper, we focus on the regret of NS algorithms and address the following question: are NS bandit algorithms competitive from this non-asymptotic point of view? In our main result, we show that competitive bounds can be obtained for their penalized version (introduced in [14]). More precisely, up to a slight modification, the regret of the penalized two-armed bandit algorithm is uniformly bounded by C√n (where C is a positive constant made explicit in the paper). We also generalize existing convergence and rate-of-convergence results to the multi-armed case of the over-penalized bandit algorithm, including convergence toward the invariant measure of a Piecewise Deterministic Markov Process (PDMP) after a suitable renormalization. Finally, ergodic properties of this PDMP are given in the multi-armed case.
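For intuition about the scheme described above, here is a minimal Python sketch of a penalized two-armed NS-type update. The step-size schedules (gamma_k and rho_k proportional to k^{-1/2}), the exact penalty rule, and the Bernoulli reward parameters are illustrative assumptions, not the precise algorithm analyzed in the paper. On a success the played arm is reinforced, as in the classical reward-inaction NS scheme; on a failure it is penalized, which is the modification that makes uniform √n-type regret bounds attainable in the penalized variant.

import random

def penalized_ns_bandit(p1, p2, n, c_gamma=1.0, c_rho=1.0):
    """Illustrative penalized two-armed Narendra-Shapiro-type scheme.

    x is the probability of playing arm 1. On a success, the played
    arm is reinforced (classical NS step); on a failure, it is
    penalized. Step sizes gamma_k, rho_k ~ k^{-1/2} are assumptions
    made for this sketch, not the paper's exact schedules.
    """
    x = 0.5                        # initial probability of choosing arm 1
    regret = 0.0
    best = max(p1, p2)             # mean reward of the optimal arm
    for k in range(1, n + 1):
        gamma = c_gamma / k**0.5   # reinforcement step (assumed schedule)
        rho = c_rho / k**0.5       # penalty step (assumed schedule)
        arm1 = random.random() < x
        reward = random.random() < (p1 if arm1 else p2)
        if reward:                 # reinforce the arm that just succeeded
            x += gamma * (1 - x) if arm1 else -gamma * x
        else:                      # penalize the arm that just failed
            x += -rho * x if arm1 else rho * (1 - x)
        x = min(max(x, 0.0), 1.0)  # keep x a valid probability
        regret += best - (p1 if arm1 else p2)  # accumulate pseudo-regret
    return x, regret

# Example run with hypothetical parameters: arm means 0.7 and 0.5,
# horizon 10,000 rounds.
x_final, pseudo_regret = penalized_ns_bandit(0.7, 0.5, 10_000)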

Keywords

Regret; Stochastic Bandit Algorithms; Piecewise Deterministic Markov Processes

Superseded by

Fabien Panloup, Sofiane Saadane and Sébastien Gadat, "Regret bound for Narendra-Shapiro bandit algorithms", Stochastics, May 2018, pp. 886–926.

Reference

Sébastien Gadat, Fabien Panloup and Sofiane Saadane, "Regret bound for Narendra-Shapiro bandit algorithms", TSE Working Paper, no. 15-556, February 2015, revised May 2016.

Published in

TSE Working Paper, no. 15-556, February 2015, revised May 2016