HMI DAIS 11 - Public online seminar, 9am, 12 November 2020 AEST
Ruobin Gong (Rutgers University) and Marcello Di Bello (Arizona State University) will give the eleventh HMI Data, AI and Society public seminar.
Ruobin Gong is Assistant Professor of Statistics at Rutgers University. Her research interests lie in the theoretical foundations of generalized Bayesian methodologies, imprecise probabilities and random sets, the Dempster-Shafer theory of belief functions, statistical inference and computation with private data, and the ethical implications of modern data science. Her research on Bayesian methods for differential privacy is supported by the National Science Foundation. Ruobin received her Hon. B.Sc. in cognitive psychology from the University of Toronto and her Ph.D. in statistics from Harvard University in 2018, advised by Xiao-Li Meng and Arthur P. Dempster. She currently serves as an associate editor for the Harvard Data Science Review.
Marcello Di Bello is Assistant Professor of Philosophy in the School of Historical, Philosophical and Religious Studies at Arizona State University. He is interested in topics at the intersection of philosophy of law and epistemology, such as statistics in the law, risk and decision-making, algorithmic fairness, and evidence and probability. His research examines how reliance on quantitative methods poses challenges and opportunities for the criminal justice system. He was a fellow of the School of Social Science at the Institute for Advanced Study in Princeton. He holds an M.Sc. in Logic from the University of Amsterdam and a Ph.D. in Philosophy from Stanford University.
Resolving Algorithmic Fairness
Algorithms are now widely used to streamline decisions in contexts as varied as insurance, health care, and criminal justice. As others have shown, algorithms can make disproportionately more errors to the detriment of disadvantaged minorities than they do for other groups. The computer science literature has articulated different criteria of algorithmic fairness, each plausible in its own way. Yet several impossibility theorems show that no algorithm can satisfy more than a few of these fairness criteria at the same time. We set out to investigate why this is so. In this talk, we first show that all criteria of algorithmic fairness can be simultaneously satisfied under a peculiar and idealized set of premises. These include assumptions about access to information, the representativeness of the training data, the capacity of the model, and, crucially, the construct of individual risk as the quantity to be assessed by the algorithm. When these assumptions are relaxed, we invoke a multi-resolution framework to understand the deterioration of the algorithm's performance in terms of both accuracy and fairness. We illustrate our results with a suite of simulation studies. While our findings do not contradict existing impossibility theorems, they shed light on the reasons behind such failures and offer a path towards a quantitative and principled resolution.
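To give a concrete feel for the kind of tension the impossibility theorems describe, here is a minimal simulation sketch (our own illustration, not code from the talk): two groups whose risk-score distributions differ, scores that are calibrated by construction (each outcome is drawn with probability equal to the score), and a single flagging threshold. Calibration holds in both groups, yet their false positive rates come apart. The Beta distributions and the 0.5 threshold are arbitrary choices for illustration.

```python
import random

random.seed(0)

def group_stats(score_dist, n=200_000, threshold=0.5):
    """Simulate one group: each individual gets a risk score from
    score_dist, an outcome drawn with probability equal to that score
    (so the score is calibrated by construction), and is flagged if
    the score meets the threshold. Returns the false positive rate,
    the realized outcome rate among the flagged, and the mean score
    among the flagged."""
    fp = tn = 0
    flagged_n = flagged_y = 0
    flagged_s = 0.0
    for _ in range(n):
        s = score_dist()             # calibrated risk score
        y = random.random() < s      # outcome occurs with probability s
        flagged = s >= threshold
        if flagged:
            flagged_n += 1
            flagged_y += y
            flagged_s += s
        if not y:                    # among individuals without the outcome
            if flagged:
                fp += 1
            else:
                tn += 1
    return fp / (fp + tn), flagged_y / flagged_n, flagged_s / flagged_n

# Group A's risks skew low, group B's skew high; both scores are calibrated.
fpr_a, rate_a, mean_a = group_stats(lambda: random.betavariate(2, 5))
fpr_b, rate_b, mean_b = group_stats(lambda: random.betavariate(5, 2))

print(f"FPR:         A={fpr_a:.3f}  B={fpr_b:.3f}")
print(f"Calibration: A realized {rate_a:.3f} vs predicted {mean_a:.3f}; "
      f"B realized {rate_b:.3f} vs predicted {mean_b:.3f}")
```

In both groups the realized outcome rate among flagged individuals closely tracks their mean predicted risk (calibration), while the false positive rates differ substantially because the groups' risk distributions differ. This is one of the conflicts that the impossibility theorems formalize, and it is driven precisely by the construct of individual risk that the talk examines.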