Working paper

Discretisation of Langevin diffusion in the weak log-concave case

Marelys Crespo

Abstract

The Euler discretisation of the Langevin diffusion, also known as the Unadjusted Langevin Algorithm, is commonly used in machine learning to sample from a given distribution µ ∝ e^{-U}. In this paper we investigate a potential U : R^d → R that is weakly convex with a Lipschitz gradient. We parameterize the weak convexity through the Kurdyka-Łojasiewicz (KL) inequality, which makes it possible to handle vanishing-curvature settings and is far less restrictive than the simple strongly convex case. We prove that the final simulation horizon needed to obtain an ε-approximation (in terms of entropy) is of order ε^{-1} d^{1+2(1+r)^2} Poly(log(d), log(ε^{-1})), where the parameter r appears in the KL inequality and varies between 0 (strongly convex case) and 1 (limiting Laplace situation).
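For readers unfamiliar with the scheme, the following is a minimal sketch of the Unadjusted Langevin Algorithm referred to in the abstract, i.e. the Euler discretisation x_{k+1} = x_k - γ ∇U(x_k) + √(2γ) ξ_k with ξ_k ~ N(0, I_d). The particular potential U(x) = √(1 + |x|²), the step size gamma, and the run length are illustrative assumptions chosen only to exhibit a weakly convex, Lipschitz-gradient setting; they are not the quantities analysed in the paper.

import numpy as np

def ula(grad_U, x0, gamma, n_steps, rng=None):
    """Unadjusted Langevin Algorithm:
    x_{k+1} = x_k - gamma * grad_U(x_k) + sqrt(2 * gamma) * xi_k, xi_k ~ N(0, I_d)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    d = x.shape[0]
    samples = np.empty((n_steps, d))
    for k in range(n_steps):
        # Euler step on the drift -grad U plus Gaussian noise of variance 2*gamma
        x = x - gamma * grad_U(x) + np.sqrt(2.0 * gamma) * rng.standard_normal(d)
        samples[k] = x
    return samples

# Hypothetical example: U(x) = sqrt(1 + |x|^2) is convex with Lipschitz gradient,
# but its curvature vanishes at infinity (weakly convex, not strongly convex).
grad_U = lambda x: x / np.sqrt(1.0 + np.dot(x, x))
chain = ula(grad_U, x0=np.zeros(5), gamma=1e-2, n_steps=10_000)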

Keywords

Unadjusted Langevin Algorithm; Entropy; Weak convexity; Rate of convergence

Reference

Marelys Crespo, Discretisation of Langevin diffusion in the weak log-concave case, TSE Working Paper, n. 24-1506, February 2024.

Published in

TSE Working Paper, n. 24-1506, February 2024