June 19, 2025, 09:55–16:30
Room Auditorium 5
Background and objective
The objective of the workshop, organized by the Department of Mathematics and Statistics, is to foster interest in emerging developments in decision mathematics. The workshop provides an opportunity for PhD students, post-doctoral researchers, and faculty members to interact and discuss both theoretical and empirical contributions, with a special focus this year on deep learning, games, optimization, probability, and statistics. It will consist of nine talks and will be held on Thursday June 19, 2025, 9:55-16:30 (UTC+2).
1. Colombe Becquart - "Invariant Coordinate Selection and Fisher discriminant subspace."
Invariant Coordinate Selection (ICS) is a multivariate technique that relies on the simultaneous diagonalization of two scatter matrices. It serves various purposes, including its use as a dimension reduction tool prior to clustering or outlier detection. ICS’s theoretical foundation establishes why and when the identified subspace should contain relevant information by demonstrating its connection with the Fisher discriminant subspace (FDS). These general results have been examined in detail primarily for specific scatter combinations within a two-cluster framework. In this study, we expand these investigations to include more clusters and scatter combinations. Our analysis reveals the importance of distinguishing whether the group centers matrix has full rank. In the full-rank case, we establish deeper connections between ICS and FDS. We provide a detailed study of these relationships for three clusters. Based on these expanded theoretical insights and supported by numerical studies, we conclude that ICS is indeed suitable for recovering the FDS under very general settings and cases of failure seem rare.
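For intuition, here is a minimal numerical sketch of the simultaneous diagonalization underlying ICS, assuming the common scatter pair (Cov, Cov4), i.e. the FOBI choice; it is an illustration only, not the specific scatter combinations studied in the talk.

```python
import numpy as np

def ics_fobi(X):
    """Minimal ICS sketch with the scatter pair (Cov, Cov4) (FOBI).

    Jointly diagonalizes S1 = Cov(X) and the fourth-moment scatter S2,
    returning the invariant coordinates ordered by generalized eigenvalue.
    """
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S1 = np.cov(Xc, rowvar=False)

    # Whiten with respect to S1, then compute the fourth-moment scatter.
    L = np.linalg.cholesky(S1)
    Z = Xc @ np.linalg.inv(L).T            # whitened data, Cov(Z) ~ identity
    r2 = np.sum(Z**2, axis=1)              # squared Mahalanobis distances
    S2_white = (Z * r2[:, None]).T @ Z / (n * (p + 2))

    # Eigen-decomposition of the whitened S2 = joint diagonalization of (S1, S2).
    eigvals, U = np.linalg.eigh(S2_white)
    order = np.argsort(eigvals)[::-1]      # descending generalized eigenvalues
    B = (np.linalg.inv(L).T @ U[:, order]).T   # unmixing matrix, rows = ICS directions
    return Xc @ B.T, eigvals[order]

# Two-cluster toy data: the extreme eigenvalues flag the Fisher direction.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 3)), rng.normal(3, 1, (50, 3))])
scores, kurt = ics_fobi(X)
print(kurt)
```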
2. Joseph Hachem - "Extreme value estimation for heterogeneous heavy-tailed data."
In extreme value analysis, it has recently been shown that one can use a de-randomization trick, replacing a random threshold in the estimator of interest with its deterministic counterpart, in order to estimate several extreme risks simultaneously, but only in an i.i.d. context. In this talk, I will show how this method can be used to handle the estimation of several tail quantities (tail index, expected shortfall, ...) in general dependence/heteroskedasticity/heterogeneity settings, under a weighted L1 assumption on the discrepancy between the average distribution of the data and the prevailing distribution. In particular, the estimation of the expected shortfall above an extreme level for heterogeneous data is not covered by the current literature. I will then show a practical application of this work to disaster management.
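To fix ideas, here is a small simulation sketch of the de-randomization idea on the Hill estimator of the tail index, in the simplest i.i.d. Pareto setting: the random threshold (an intermediate order statistic) is replaced by a deterministic level u_n. The estimators and settings of the talk are far more general.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
X = (1 - rng.random(n)) ** (-1.0)     # standard Pareto, true tail index gamma = 1

# Hill estimator with a *random* threshold: the (n-k)-th order statistic.
k = 200
Xs = np.sort(X)
hill_random = np.mean(np.log(Xs[-k:]) - np.log(Xs[-k - 1]))

# De-randomized variant: a *deterministic* threshold u_n; the number of
# exceedances is now random, but the threshold itself is not.
u_n = 50.0                            # illustrative deterministic level
excess = X[X > u_n]
hill_det = np.mean(np.log(excess / u_n))

print(hill_random, hill_det)          # both close to the true gamma = 1
```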
3. Melissa González - "Percolation Games: Zero-sum stochastic games on Z^d."
Percolation games are a class of zero-sum stochastic games in which a token moves across a d-dimensional lattice according to a transition function jointly controlled by two players, with spatially random payoffs. For a given initial token position, the n-stage game is defined as the game whose payoff is the (random) average payoff over n stages. A central question is whether the value of the n-stage game converges almost surely as n tends to infinity. When the payoffs are spatially i.i.d. and the game is oriented (meaning the token tends to move in a fixed direction regardless of the players' actions), convergence is known to occur. What role do these hypotheses play? What happens if one of them is not satisfied? What other aspects of the game could we focus on when studying the long-duration game? In this talk, we will introduce this class of games and briefly explore these questions through a variety of examples in different settings. These examples will highlight both the challenges and the mathematical richness that arise when studying this model, while also showcasing connections with other areas of mathematics.
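As a toy illustration of the objects involved, assume a 1-D oriented game where both players pick actions in {0,1}, the token moves right by one or two steps depending on the action pair, and the stage payoff is the i.i.d. field at the current site; the n-stage value then follows the Shapley recursion sketched below. This is an illustrative special case, not the general model of the talk.

```python
import numpy as np

def value_2x2(A):
    """Value of a 2x2 zero-sum matrix game (row player maximizes)."""
    maximin = max(min(A[0]), min(A[1]))
    minimax = min(max(A[0][0], A[1][0]), max(A[0][1], A[1][1]))
    if maximin == minimax:                   # saddle point in pure strategies
        return maximin
    a, b, c, d = A[0][0], A[0][1], A[1][0], A[1][1]
    return (a * d - b * c) / (a + d - b - c) # mixed value when no saddle exists

rng = np.random.default_rng(2)
N = 300                                      # horizon
g = rng.random(3 * N)                        # i.i.d. payoff field on the line
move = [[1, 2], [2, 1]]                      # oriented: always moves right

# Shapley recursion for the total n-stage payoff V_n(z).
V = np.zeros(3 * N)
for n in range(1, N + 1):
    V_new = np.zeros(3 * N)
    for z in range(3 * N - 2):
        A = [[g[z] + V[z + move[a][b]] for b in (0, 1)] for a in (0, 1)]
        V_new[z] = value_2x2(A)
    V = V_new

print(V[0] / N)   # average value of the N-stage game started at the origin
```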
4. Thăng Lê Nhật - "Forward algorithm for a class of optimal stopping games."
In this talk, we present a new, efficient algorithm for computing the value function in zero-sum stopping games involving two players with opposing objectives. This framework can be seen as a game-theoretic extension of the “forward algorithm” for the one-player optimal stopping problem, originally introduced by Irle (2006) for discrete-time Markov chains and later revisited by Miclo and Villeneuve (2021) for continuous-time settings on general state spaces. Our focus is on games driven by continuous-time Markov chains on a finite state space. We discuss the main ideas behind the proofs, analyze the number of iterations required for convergence, and provide several illustrative examples.
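For contrast, here is a minimal sketch of the classical fixed-point value iteration for the one-player discrete-time stopping problem, V = max(g, beta * P V); this is the standard baseline, not the forward algorithm discussed in the talk.

```python
import numpy as np

def stopping_value(P, g, beta=0.95, tol=1e-10):
    """Value of the one-player optimal stopping problem V = max(g, beta * P V)
    on a finite-state discrete-time Markov chain, by standard value iteration
    (a contraction for beta < 1)."""
    V = g.copy()
    while True:
        V_new = np.maximum(g, beta * P @ V)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

# Toy chain: 3 states, stopping reward g, random-walk-like transitions.
P = np.array([[0.5,  0.5,  0.0],
              [0.25, 0.5,  0.25],
              [0.0,  0.5,  0.5]])
g = np.array([1.0, 0.0, 2.0])
V = stopping_value(P, g)
stop_region = V <= g + 1e-9          # states where stopping is optimal
print(V, stop_region)
```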
5. Camille Mondon - "ICS for complex data with application to outlier detection for density data."
Invariant coordinate selection (ICS) is a dimension reduction method, used as a preliminary step for clustering and outlier detection. It has been primarily applied to multivariate data. This work introduces a coordinate-free definition of ICS in an abstract Euclidean space and extends the method to complex data. Functional and distributional data are preprocessed into a finite-dimensional subspace. In the framework of Bayes Hilbert spaces, distributional data are smoothed into compositional spline functions, for example through the Maximum Penalised Likelihood method. We describe an outlier detection procedure for complex data and study the impact of some preprocessing parameters on the results. We compare our approach with other outlier detection methods through simulations, producing promising results in scenarios with a low proportion of outliers. Applied to a sample of daily maximum temperature distributions recorded across the provinces of Northern Vietnam between 1987 and 2016, ICS detects abnormal climate events.
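As a rough sketch of the preprocessing idea, assuming densities already discretized on a common uniform grid and skipping the compositional-spline smoothing step, distributional data can be mapped to centred log-ratio (clr) coordinates of the Bayes Hilbert space before any multivariate outlier score (such as ICS-based scores) is applied.

```python
import numpy as np

def clr(densities):
    """Centred log-ratio transform of densities sampled on a common uniform
    grid, mapping Bayes Hilbert space elements to ordinary L2 coordinates.
    (Sketch only: the compositional-spline smoothing step is omitted.)"""
    logf = np.log(densities)
    return logf - logf.mean(axis=1, keepdims=True)

# Toy sample: 30 Gaussian densities; index 0 is abnormally wide.
grid = np.linspace(-5, 5, 101)
dx = grid[1] - grid[0]
sigmas = np.full(30, 1.0)
sigmas[0] = 2.5
dens = np.exp(-grid**2 / (2 * sigmas[:, None] ** 2))
dens /= dens.sum(axis=1, keepdims=True) * dx

Z = clr(dens)
# Any multivariate outlier score can now run on Z (here: distance to the
# mean clr function, a crude stand-in for the ICS-based procedure).
score = np.linalg.norm(Z - Z.mean(axis=0), axis=1)
print(np.argmax(score))   # flags the wide density, index 0
```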
6. Léo Portales - "On the sample complexity of fixed support Wasserstein barycenters."
In this presentation, I will introduce the notion of an optimal transport barycenter with sparse support, explain its relevance and its link with the general Wasserstein barycenter, and provide its sample complexity for general target measures with compact support and for various optimal transport divergences. I will do so by first giving some background on optimal transport and statistical learning theory, and then presenting our main results as well as a quick sketch of their proof.
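For concreteness, here is a compact sketch of one standard way to compute a fixed-support barycenter: entropic regularization solved by iterative Bregman projections (Benamou et al., 2015). The talk's sample-complexity analysis concerns the statistical behaviour of such objects, not this particular solver.

```python
import numpy as np

def fixed_support_barycenter(B, C, weights, reg=0.05, iters=500):
    """Entropic-regularized Wasserstein barycenter on a fixed support.

    B       : (S, m) array, each row a histogram on the support
    C       : (m, m) pairwise cost matrix on the support
    weights : (S,) barycentric weights summing to 1
    Iterative Bregman projections (Benamou et al., 2015)."""
    K = np.exp(-C / reg)
    v = np.ones_like(B)
    for _ in range(iters):
        u = B / (K @ v.T).T                    # u_s = b_s / (K v_s)
        Ktu = (K.T @ u.T).T                    # rows: K^T u_s
        a = np.exp(weights @ np.log(Ktu))      # weighted geometric mean = barycenter
        v = a[None, :] / Ktu                   # v_s = a / (K^T u_s)
    return a

# Toy example: barycenter of two 1-D histograms on a shared grid.
x = np.linspace(0, 1, 60)
C = (x[:, None] - x[None, :]) ** 2
b1 = np.exp(-(x - 0.2) ** 2 / 0.005); b1 /= b1.sum()
b2 = np.exp(-(x - 0.8) ** 2 / 0.005); b2 /= b2.sum()
a = fixed_support_barycenter(np.vstack([b1, b2]), C, np.array([0.5, 0.5]))
print(x[np.argmax(a)])   # mass concentrates near the midpoint 0.5
```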
7. Quoc-Tung Le - "Bilevel gradient methods and Morse parametric qualification."
The Morse parametric qualification condition is a new condition that generalizes uniform strong convexity. In this framework, we study bilevel gradient algorithms with two strategies: the single-step multi-step strategy, which performs a sequence of steps on the lower-level problem followed by one step on the upper-level problem, and a differentiable-programming strategy, which optimizes a smooth approximation of the bilevel problem. While the first is shown to be a biased gradient method on the original problem with rich properties, the second, inspired by meta-learning applications, is less stable but offers simplicity and ease of implementation.
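A minimal PyTorch sketch of the differentiable-programming strategy on a toy quadratic bilevel problem follows; the problem, step sizes, and horizon are illustrative assumptions, not those of the talk.

```python
import torch

# Toy bilevel problem: min_x F(x) = f(x, y*(x)),  y*(x) = argmin_y g(x, y),
# with g(x, y) = 0.5 * ||y - x||^2  (so grad_y g = y - x)
# and  f(x, y) = 0.5 * ||y - 1||^2 + 0.1 * ||x||^2.
x = torch.zeros(2, requires_grad=True)
alpha, beta, K = 0.5, 0.1, 10          # inner step, outer step, inner iterations
opt = torch.optim.SGD([x], lr=beta)

for outer in range(300):
    # Differentiable-programming strategy: unroll K inner gradient steps on
    # the lower level and backpropagate through the whole loop. (The
    # single-step multi-step strategy would instead detach y before the
    # upper-level step, yielding a biased gradient method.)
    y = torch.zeros(2) + 0 * x         # start the inner loop inside x's graph
    for _ in range(K):
        y = y - alpha * (y - x)        # inner gradient step on g
    loss = 0.5 * ((y - 1.0) ** 2).sum() + 0.1 * (x ** 2).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(x.detach())   # approaches the bilevel solution (here y*(x) = x, x ~ 0.83)
```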
8. Bilel Bensaid - "Complexities of Armijo-like algorithms in Deep Learning context."
The classical Armijo backtracking algorithm achieves the same optimal complexity as gradient descent for smooth functions, but without any hyperparameter tuning. However, the smoothness assumption is not suitable for Deep Learning optimization. In this work, we show that some variants of the Armijo optimizer achieve acceleration and optimal complexities under assumptions better suited to Deep Learning: the (L0, L1) smoothness condition and analyticity. New dependencies on the smoothness constants and the initial gap are established. These results theoretically highlight the efficiency of Armijo-like conditions for highly non-convex problems. If time permits, we will present a way to adapt the Armijo rule to mini-batch optimization without the drawbacks of variance-reduction methods.
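The classical Armijo rule itself is easy to state in code; the following sketch is the textbook backtracking version on a toy function that is not globally smooth, not the accelerated variants analyzed in the talk.

```python
import numpy as np

def armijo_gd(f, grad_f, x0, c=1e-4, rho=0.5, t0=1.0, iters=100):
    """Gradient descent with classical Armijo backtracking: shrink the step t
    until f(x - t g) <= f(x) - c * t * ||g||^2, with no tuning of t beyond
    the backtracking itself."""
    x = x0.astype(float)
    for _ in range(iters):
        g = grad_f(x)
        t = t0
        while f(x - t * g) > f(x) - c * t * (g @ g):
            t *= rho                    # backtrack: shrink the step
        x = x - t * g
    return x

# Toy example: f(x) = ||x||^4 / 4 is not globally L-smooth (its Hessian
# grows with ||x||), closer in spirit to the (L0, L1) setting.
f = lambda x: 0.25 * (x @ x) ** 2
grad_f = lambda x: (x @ x) * x
print(armijo_gd(f, grad_f, np.array([3.0, -2.0])))   # converges toward 0
```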
9. Dmitry Kamzolov - "OPTAMI: Global Superlinear Convergence of High-order Methods."
Second-order methods for convex optimization outperform first-order methods in terms of theoretical iteration convergence, achieving rates up to O(k^-5) for highly smooth functions. However, their practical performance and applications are limited due to their multi-level structure and implementation complexity. In this paper, we present new results on high-order optimization methods, supported by their practical performance. First, we show that basic high-order methods, such as the Cubic Regularized Newton Method, exhibit global superlinear convergence for µ-strongly star-convex functions, a class that includes µ-strongly convex functions and some non-convex functions. Theoretical convergence results are both inspired and supported by the practical performance of these methods. Second, we propose a practical version of the Nesterov Accelerated Tensor method, called NATA, which significantly outperforms the classical variant and other high-order acceleration techniques in practice. The convergence of NATA is also supported by theoretical results. Finally, we introduce an open-source computational library for high-order methods, called OPTAMI. This library includes various methods, acceleration techniques, and subproblem solvers, all implemented as PyTorch optimizers, thereby facilitating the practical application of high-order methods to a wide range of optimization problems. We hope this library will simplify research and practical comparison of methods beyond first order.
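For readers new to these methods, here is a simplified sketch of one Cubic Regularized Newton step (Nesterov and Polyak, 2006), solving the cubic subproblem by bisection on the step norm. It is a stand-in illustration for the convex case, not the OPTAMI implementation.

```python
import numpy as np

def cubic_newton_step(g, H, M, r_max=1e6, tol=1e-12):
    """One Cubic Regularized Newton step: minimize
    g^T h + 0.5 h^T H h + (M/6) ||h||^3 over h, via bisection on r = ||h||,
    using h(r) = -(H + (M r / 2) I)^{-1} g and the condition ||h(r)|| = r.
    Assumes H is positive definite (convex objective)."""
    I = np.eye(len(g))

    def step_norm(r):
        return np.linalg.norm(np.linalg.solve(H + 0.5 * M * r * I, -g))

    lo, hi = 0.0, r_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if step_norm(mid) > mid:   # ||h(r)|| decreases in r: root is larger
            lo = mid
        else:
            hi = mid
    r = 0.5 * (lo + hi)
    return np.linalg.solve(H + 0.5 * M * r * I, -g)

# Toy run on the convex, non-quadratic f(x) = ||x||^4 / 4 + x[0],
# whose minimizer is (-1, 0).
grad = lambda x: (x @ x) * x + np.array([1.0, 0.0])
hess = lambda x: 2.0 * np.outer(x, x) + (x @ x) * np.eye(2)

x = np.array([2.0, 1.0])
for _ in range(30):
    x = x + cubic_newton_step(grad(x), hess(x), M=20.0)
print(x)   # approaches (-1, 0)
```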