Working Paper

When majority rules, minority loses: bias amplification of gradient descent

François Bachoc, Jérôme Bolte, Ryan Boustany and Jean-Michel Loubes

Abstract

Despite growing empirical evidence of bias amplification in machine learning, its theoretical foundations remain poorly understood. We develop a formal framework for majority-minority learning tasks, showing how standard training can favor majority groups and produce stereotypical predictors that neglect minority-specific features. Assuming population and variance imbalance, our analysis reveals three key findings: (i) the close proximity between “full-data” and stereotypical predictors, (ii) the dominance of a region where training the entire model tends to merely learn the majority traits, and (iii) a lower bound on the additional training required. Our results are illustrated through experiments in deep learning for tabular and image classification tasks.

Reference

François Bachoc, Jérôme Bolte, Ryan Boustany and Jean-Michel Loubes, “When majority rules, minority loses: bias amplification of gradient descent”, TSE Working Paper, No. 25-1641, May 2025.

Published in

TSE Working Paper, No. 25-1641, May 2025