Workshop

Doctoral Workshop on Decision Mathematics and Statistics

June 16, 2022, 09:55–17:05

Room Auditorium 5

Background and objective 

The objective of the workshop, organized by MADS, is to foster interest in emerging developments in decision mathematics. The workshop provides an opportunity for PhD students, post-doctoral researchers, and faculty members to interact and discuss both theoretical and empirical contributions, with a special focus this year on data science, finance, games, optimization, and statistics. It will consist of 9 talks and will be held on Thursday June 16, 2022, 9:55–17:05 (UTC+2).

Organizing Committee:

Abdelaati Daouia, Sébastien Gadat, Christine Thomas-Agnan

Conference Secretariat:

Aline Soulié, Corinne Vella

 

Yasser Abbas - "Understanding World Economy Dynamics Based on Indicators and Events"

Studying the content and impact of news articles has been a recurring interest in the economics, finance, psychology, political science, and media literatures over the past 40 years. Most of these works focus on specific qualities or outcomes related to their textual data, which limits their applicability and scope. Instead, we use novel datasets that adopt a more holistic approach to data gathering and text mining, allowing texts to speak for themselves without shackling them with presupposed goals or biases. Our data consist of networks whose nodes represent key performance indicators of companies, industries, and countries. These nodes are linked by edges weighted by the number of times the concepts were connected in media articles between January 2018 and January 2022. We study these networks through the lens of graph theory and use modularity-based clustering, in the form of the Leiden algorithm, to group nodes into information-rich communities. We showcase the potential of such data by first exploring the evolution of our dynamic networks and their metrics over time, which highlights their ability to tell coherent and concise stories about the world economy. To further demonstrate the untapped power of dynamic networks, we develop a new early warning system using the tools of extreme value theory. This adaptive system manages to detect Covid-19 as a threat as early as December 2019.
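As a rough illustration of the clustering step only, the sketch below applies the Leiden algorithm with a modularity objective to a small weighted co-occurrence network using python-igraph; the node names, edge weights, and library choice are illustrative assumptions, not the authors' actual pipeline or data.

    # Minimal sketch: modularity-based Leiden clustering of a weighted
    # co-occurrence network (illustrative data, not the authors' dataset).
    import igraph as ig

    # Nodes stand for indicators; edge weights count media co-mentions.
    nodes = ["GDP_FR", "oil_price", "airline_revenue", "US_unemployment"]
    edges = [(0, 1), (1, 2), (0, 3)]
    weights = [120, 45, 60]

    g = ig.Graph(edges=edges)
    g.vs["name"] = nodes
    g.es["weight"] = weights

    # Leiden algorithm with the modularity objective groups nodes into communities.
    partition = g.community_leiden(objective_function="modularity", weights="weight")
    for cid, members in enumerate(partition):
        print(cid, [g.vs[m]["name"] for m in members])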

Maurizio D'Andrea - "Constant Step-Size Multiplicative Weights Algorithm in Coordination Games."

We study n-player repeated games where each agent competes against the others using the multiplicative weights update (MWU) algorithm with constant step size. MWU algorithms are a class of algorithms widely studied in the literature, particularly in online learning and decision making. In the MWU, at each step, all players select an action based on their mixed strategies, then receive information about their payoff vector and update their mixed strategies accordingly. The paper studies the long-run behaviour of the mixed strategy sequence generated by the MWU algorithm. We first prove that the existence of a "weakly stable" Nash equilibrium is a necessary condition for almost sure convergence (for example, convergence cannot be obtained in Matching Pennies). We then prove that in games with a strict Nash equilibrium, the strategies induced by the MWU converge to the boundary of the strategy set. Finally, using a similar argument, we prove that, under some assumptions, in coordination games the strategies induced by the MWU converge almost surely to a strict Nash equilibrium.
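For reference, the constant step-size MWU update takes the following standard form (generic notation, not taken from the paper): each player i with action set A_i updates her mixed strategy x_i^t via

\[
x_i^{t+1}(a) \;=\; \frac{x_i^{t}(a)\,\exp\!\big(\eta\, u_i(a, x_{-i}^{t})\big)}{\sum_{b \in A_i} x_i^{t}(b)\,\exp\!\big(\eta\, u_i(b, x_{-i}^{t})\big)},
\]

where \eta > 0 is the constant step size and u_i(a, x_{-i}^{t}) is the expected payoff of action a against the opponents' current mixed strategies.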

Ryan Boustany - "Complexity of nonsmooth algorithmic differentiation"

Algorithmic Differentiation (AD) is a pillar of machine learning, with backpropagation implementing the "cheap gradient" principle. Its computational complexity is well understood for smooth functions, but the question is mostly open in the nonsmooth case. There is a variety of nonsmooth first-order oracles generalizing gradients: the Clarke subdifferential, conservative gradients, directional derivatives, the lexicographic subdifferential… Our main thesis is that conservative gradients have favorable computational properties, and we support this claim with three main results. We extend the Baur-Strassen theorem to conservative gradients, proving a nonsmooth "cheap conservative gradient" principle for backpropagation. We relate the complexity of computing a large number of directional derivatives to that of matrix multiplication, a fundamental limitation to potential efficiency improvements of forward AD for this task. Finally, while computing one subgradient is feasible for large classes of functions, computing two distinct subgradients is NP-hard even for simple neural networks, illustrating the hardness of Clarke subgradient enumeration.
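As background, the smooth "cheap gradient" principle of Baur and Strassen can be stated in the following standard form, recalled here for orientation:

\[
\mathrm{cost}(f, \nabla f) \;\le\; c \cdot \mathrm{cost}(f),
\]

where cost counts arithmetic operations and c is a small universal constant independent of the input dimension; the first result of the talk extends a bound of this type from gradients to conservative gradients computed by backpropagation.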

Lukas Dargel - "A generalized framework for estimating spatial econometric interaction models"

In the framework of spatial econometric interaction models for origin-destination flows, we develop an estimation method for the case where the list of origins may be distinct from the list of destinations, and where the origin-destination matrix may be sparse. The proposed model resembles a weighted version of that of LeSage and Pace (2008), and we are able to retain most of the efficiency gains associated with the matrix-form estimation, which we illustrate for the maximum likelihood estimator. We also derive computationally feasible tests for the coherence of the estimation results and present an efficient approximation of the conditional expectation of the flows.
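For orientation, in the square, unweighted case of LeSage and Pace (2008) the spatial econometric interaction model for the vectorized flows y = vec(Y) can be written (up to the ordering convention of the vectorization) as

\[
y \;=\; \rho_o W_o\, y + \rho_d W_d\, y + \rho_w W_w\, y + Z\delta + \varepsilon,
\qquad W_o = W \otimes I_n,\;\; W_d = I_n \otimes W,\;\; W_w = W \otimes W,
\]

where W is the n x n spatial weight matrix shared by origins and destinations; the generalization presented in the talk allows distinct origin and destination lists and a sparse origin-destination matrix.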

Ngoc Tâm Lê - "Subgradient sampling for nonsmooth and nonconvex minimization"

We focus on a stochastic risk minimization problem in the nonsmooth and nonconvex setting, which applies for instance to the training of deep learning models. A popular way in the machine learning community to deal with this problem is stochastic gradient descent (SGD). This method aims to decrease the risk function by combining subgradient sampling with backpropagation, an efficient implementation of the chain-rule formula. Due to the incompatibility of these operations in the nonsmooth world, SGD can generate spurious critical points in the optimization landscape, which does not guarantee convergence of the objective function to a meaningful value. In this talk we present a notion of generalized gradient for nonsmooth functions, called conservative gradients, that is compatible with subgradient sampling and backpropagation and allows us to obtain convergence results for SGD in the nonsmooth nonconvex setting. By means of definable geometry, we emphasize that functions used in machine learning are naturally endowed with the simple geometric structure of piecewise polynomial functions. We then show in this setting that spurious critical points are hardly ever encountered when performing SGD in practice.
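Schematically, the iteration analysed can be written as follows (generic notation, not the paper's exact formulation):

\[
\theta_{k+1} \;=\; \theta_k - \gamma\, v_k, \qquad v_k \in D_{\ell(\cdot,\,\xi_k)}(\theta_k),
\]

where \xi_k is the data point sampled at step k, \gamma the step size, \ell the (nonsmooth) loss, and D_\ell a conservative gradient of the sampled loss, of which the output of backpropagation is one element.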

Estelle Medous - "A prediction approach for statistical data integration in survey sampling."

In a finite population setting, it is possible to improve the efficiency of estimators based on a probability sample by using non-probability big data sources. However, the target variable may not be observable in the big data sources, while the auxiliary information present in these sources may not be measured in the probability sample. In such a situation, new estimators can be proposed through a prediction approach. These estimators are either design-based, model-based, or cosmetic. Their properties in terms of bias and efficiency are studied using theoretical and simulation results. The practical interest of the new proposals is illustrated in the context of the French postal service, where the objective is to estimate the monthly postal traffic through a survey of mailmen rounds while taking advantage of a database containing information on the automatically processed postal mail.
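As one concrete point of reference, a classical design-based prediction (difference) estimator of the population total of the target variable can be written as follows; this is a standard benchmark given for orientation, not the paper's exact proposal:

\[
\hat t_y \;=\; \sum_{i \in U} \hat y_i \;+\; \sum_{i \in s} \frac{y_i - \hat y_i}{\pi_i},
\]

where the predictions \hat y_i are built from the auxiliary information available in the big data source over the population U, and y_i and \pi_i are the observed values and inclusion probabilities in the probability sample s. The complication addressed in the talk is precisely that such predictions may not be directly available on both the sample and the population.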

Etienne De Montbrun - "Convergence of Optimistic Gradient Descent Ascent in Bilinear Games"

Maxmin problems are harder to solve than minimization problems because of cycling phenomena. This issue famously arises in Generative Adversarial Networks, but it also appears in simpler settings such as bilinear games. In our work, we consider the problem of finding a Nash equilibrium in bilinear maxmin optimization. In particular, we study the Optimistic Gradient Descent Ascent (OGDA) algorithm, a tweaked version of the well-known Gradient Descent Ascent. We study how optimism stabilizes the convergence process for such games and avoids divergent cycling phenomena, and we generalize existing results to a larger class of games. We also examine the behaviour of Projected OGDA for two-player zero-sum games and show how the projection interacts with OGDA.
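For the bilinear game f(x, y) = x^T A y, the OGDA iterates take the following standard form (recalled for orientation, with step size \eta):

\[
x_{t+1} = x_t - \eta\big(2 A y_t - A y_{t-1}\big), \qquad
y_{t+1} = y_t + \eta\big(2 A^{\top} x_t - A^{\top} x_{t-1}\big),
\]

i.e. the plain gradient descent ascent step plus an "optimistic" correction using the previous gradient; it is this extra term that damps the cycling of vanilla GDA.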

Dana Pizarro - "Dynamic Resource Constrained Reward Collection Problems:  Unified Model and Analysis."

Dynamic resource allocation problems arise in a variety of settings and have been studied across disciplines such as Operations Research and Computer Science. In this talk I will introduce a unifying model for a very large class of dynamic optimization problems, which we call dynamic resource constrained reward collection (DRC). I will show that this class encompasses a variety of disparate classical dynamic optimization problems such as dynamic pricing with capacity constraints, dynamic bidding with budgets, network revenue management, online matching, and order fulfillment, to name a few. Furthermore, the class of DRC problems, while highly general, is amenable to analysis. In particular, I will characterize the performance of the fluid certainty equivalent control heuristic for this class. Notably, this very general result recovers some existing specialized results as corollaries, generalizes other existing results by weakening the assumptions required, and also yields new results in specialized settings for which no such characterization was available. As such, the DRC class isolates common features of a broad class of problems and offers a new object of analysis.
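As a generic illustration (not taken from the talk), the fluid certainty equivalent heuristic replaces random rewards and resource consumptions by their expectations and solves the resulting deterministic program

\[
\max_{x_t \ge 0} \;\; \sum_{t=1}^{T} \bar r_t^{\top} x_t
\quad \text{subject to} \quad \sum_{t=1}^{T} \bar A_t\, x_t \le B,
\]

where \bar r_t and \bar A_t are the expected per-period rewards and resource consumptions and B is the initial resource budget; the control obtained from this relaxation is then applied to the stochastic system, and the talk characterizes the performance of this heuristic across the DRC class.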

Amadou Yoro Thiam - "Efficient subsidized prices in the commodity spot market."

This paper analyzes the effects of a subsidized price proposal on the production of an agricultural commodity. Facing both price risk and production risk, producers are assumed to be risk-averse and to control their production policy so as to maximize the expected utility of the output. We suppose that the producers have the opportunity to produce and sell the output at a subsidized price fixed in advance and guaranteed by a state-related agency. Using a stochastic optimal control approach, we derive the optimal production policy and provide some meaningful comparative statics which help to better understand the impact of subsidies on commodity production. We also prove that there exists a unique fixed price that provides the same utility level to producers whether they enter the market or sell the output at the subsidized price. Finally, in order to hedge the market risk transferred from producers to the state-related agency, we show how to fix the subsidized price so that the expected loss for the agency is zero.
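A static illustration of the indifference condition behind the unique fixed price (the talk works in a dynamic stochastic control setting; the notation below is only indicative):

\[
\mathbb{E}\Big[U\big(\tilde Y(q^{*})\,\tilde P - c(q^{*})\big)\Big]
\;=\;
\mathbb{E}\Big[U\big(\tilde Y(q^{s})\,P^{s} - c(q^{s})\big)\Big],
\]

where \tilde P is the random market price, \tilde Y(q) the random output under production policy q, c the production cost, U the producers' utility, and q^{*}, q^{s} the optimal policies under market and subsidized sales; the unique P^{s} solving this equation leaves producers indifferent between the two regimes.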