HMI DAIS 14 - On Statistical Criteria of Algorithmic Fairness

HMI DAIS 14 - Public online seminar, 12pm 1 April 2021 AEST

Brian Hedden (Australian National University) gave the first HMI DAIS Seminar of 2021.

Brian Hedden is Associate Professor of Philosophy at ANU. Before joining ANU, he was a PhD student at MIT, a postdoc at Oxford, and a faculty member at the University of Sydney.

His research focuses on epistemology and decision theory, as well as related areas of ethics and political philosophy. He has recently written about our moral obligations in collective action problems like climate change mitigation, the use of statistical evidence in the law, and statistical criteria of fairness for algorithmic predictions in the criminal justice system and elsewhere.

He has been awarded two major competitive grants from the Australian Research Council: a Discovery Early Career Researcher Award for a project on group rationality, and a Discovery Project (with Mark Colyvan) for a project on legal evidence. He is the author of Reasons without Persons (OUP 2015) as well as articles in Mind, Journal of Philosophy, Ethics, Noûs, Philosophy and Phenomenological Research, and Philosophy and Public Affairs.

Seminar Title: On Statistical Criteria of Algorithmic Fairness

Abstract: Predictive algorithms are playing an increasingly prominent role in society, being used to predict recidivism, loan repayment, job performance, and so on. With this increasing influence has come an increasing concern with the ways in which they might be unfair or biased against individuals in virtue of their race, gender, or, more generally, their group membership. Many purported criteria of algorithmic fairness concern statistical relationships between the algorithm's predictions and the actual outcomes, for instance requiring that the rate of false positives be equal across the relevant groups. We might seek to ensure that algorithms satisfy all of these purported fairness criteria. But a series of impossibility results shows that this cannot be done, unless base rates are equal across the relevant groups. What are we to make of these pessimistic results? I argue that none of the purported criteria, except for a calibration criterion, are necessary conditions for fairness, on the grounds that they can all be simultaneously violated by a manifestly fair and uniquely optimal predictive algorithm, even when base rates are equal. I conclude with some general reflections on algorithmic fairness.
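To make the abstract's statistical criteria concrete, here is a minimal illustrative sketch (not drawn from the talk itself; all data and function names are made up for illustration) of two such criteria for a binary predictor evaluated on two groups: equal false positive rates, and calibration among those predicted positive.

```python
# Illustrative sketch of two statistical fairness criteria for a binary
# predictor (1 = predicted positive, e.g. predicted to reoffend).
# All names and data here are hypothetical, chosen only to illustrate
# the definitions mentioned in the abstract.

def false_positive_rate(preds, outcomes):
    """Fraction of actual negatives (outcome 0) predicted positive."""
    negatives = [p for p, y in zip(preds, outcomes) if y == 0]
    return sum(negatives) / len(negatives) if negatives else 0.0

def precision_among_flagged(preds, outcomes):
    """Calibration-style check: of those predicted positive, the
    fraction whose actual outcome was positive."""
    flagged = [y for p, y in zip(preds, outcomes) if p == 1]
    return sum(flagged) / len(flagged) if flagged else 0.0

# Hypothetical predictions and outcomes for two groups with EQUAL
# base rates (1 actual positive out of 5 in each group).
group_a_preds, group_a_outcomes = [1, 1, 0, 0, 0], [1, 0, 0, 0, 0]
group_b_preds, group_b_outcomes = [1, 0, 0, 0, 0], [1, 0, 0, 0, 0]

fpr_a = false_positive_rate(group_a_preds, group_a_outcomes)  # 1/4 = 0.25
fpr_b = false_positive_rate(group_b_preds, group_b_outcomes)  # 0/4 = 0.0

# The "equal false positive rates" criterion is violated here (0.25 vs 0.0)
# even though base rates are equal across the two groups.
```

This toy example only shows how such criteria are computed; the impossibility results the abstract refers to concern whether several of these criteria can hold jointly when base rates differ.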

Recordings of HMI DAIS seminars are available to view.
