November 10, 2021, 12:30–13:30
Western societies are marked by diverse and extensive biases and inequalities that are unavoidably embedded in the data used to train machine learning models. Algorithms trained on biased data will, without intervention, produce biased outcomes and increase the inequality experienced by historically disadvantaged groups. Recognising this problem, much work has emerged in recent years to test for bias in machine learning and AI systems using various fairness and bias metrics. Often these metrics address technical bias but ignore the underlying causes of inequality and take for granted the scope, significance, and ethical acceptability of existing inequalities. In this talk I will introduce the concept of “bias preservation” as a means to assess the compatibility of fairness metrics used in machine learning with the notions of formal and substantive equality. The fundamental aim of EU non-discrimination law is not only to prevent ongoing discrimination, but also to change society, policies, and practices to ‘level the playing field’ and achieve substantive rather than merely formal equality. Based on this, I will introduce a novel classification scheme for fairness metrics in machine learning based on how they handle pre-existing bias and thus align with the aims of substantive equality. Specifically, I will distinguish between ‘bias preserving’ and ‘bias transforming’ fairness metrics. This classification system is intended to bridge the gap between notions of equality, non-discrimination law, and decisions around how to measure fairness and bias in machine learning. Bias transforming metrics are essential to achieve substantive equality in practice.
I will conclude by introducing a bias transforming metric, ‘Conditional Demographic Disparity’, which aims to reframe the debate around AI fairness, shifting it away from the question of which fairness metric is the right one to choose, and towards identifying ethically, legally, socially, or politically preferable conditioning variables according to the requirements of specific use cases.
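The abstract does not define the metric itself, so the following is only a minimal illustrative sketch of one common formulation of conditional demographic disparity: demographic disparity (the share of a group among rejected candidates minus its share among accepted candidates) is computed within each stratum of a chosen conditioning variable, then averaged with weights proportional to stratum size. The function names, the tuple-based data layout, and the treatment of one-sided strata are all assumptions for illustration, not part of the talk.

```python
from collections import defaultdict


def demographic_disparity(rows, group):
    """DD for `group`: share of `group` among rejected minus share among accepted.

    Each row is (group_label, accepted: bool). A positive DD means the group
    is over-represented among rejections relative to acceptances.
    """
    rejected = [g for g, accepted in rows if not accepted]
    accepted = [g for g, acc in rows if acc]
    if not rejected or not accepted:
        # DD is not well defined when one outcome is empty; this sketch
        # treats such strata as contributing no disparity.
        return 0.0
    p_rejected = sum(g == group for g in rejected) / len(rejected)
    p_accepted = sum(g == group for g in accepted) / len(accepted)
    return p_rejected - p_accepted


def conditional_demographic_disparity(rows, group):
    """CDD: per-stratum DDs averaged with weights proportional to stratum size.

    Each row is (group_label, accepted: bool, stratum), where `stratum` is a
    value of the chosen conditioning variable (e.g. department applied to).
    """
    strata = defaultdict(list)
    for g, acc, stratum in rows:
        strata[stratum].append((g, acc))
    n = len(rows)
    return sum(
        len(members) / n * demographic_disparity(members, group)
        for members in strata.values()
    )
```

Choosing the stratification key passed in the rows is exactly the decision the talk highlights: the code stays the same, while the conditioning variable carries the ethical and legal judgment about which inequalities are acceptable to condition away.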