HMI DAIS 08 - Deborah Hellman, University of Virginia
Public online seminar, 9am 1 October 2020 AEST
Deborah Hellman, University of Virginia, gave the eighth HMI Data, AI and Society public seminar.
Deborah Hellman is the David Lurton Massee, Jr. Professor of Law at the University of Virginia School of Law. Her two main scholarly interests are discrimination and corruption. She is the author of When Is Discrimination Wrong? (Harvard University Press, 2008) and A Theory of Bribery, 38 Cardozo L. Rev. 1947 (2017), which won the 2019 Fred Berger Memorial Prize from the American Philosophical Association. Her articles most directly related to the subject of this talk include Measuring Algorithmic Fairness, 106 Va. L. Rev. 811 (2020) and Sex, Causation and Algorithms, __ Wash. U. L. Rev. __ (forthcoming, 2020). She was elected to the American Law Institute in 2019.
‘Big Data and Compounding Injustice’
Abstract: In this paper, Deborah argues that the fact that a person has been a victim of prior injustice affects how others should treat her. In particular, this fact generates reasons that others should consider in deciding how they interact with her. The article's moral claim is that the fact that an action will compound a prior injustice counts as a reason against performing that action. For ease of exposition, she calls these reasons to act, or to refrain from acting, so as not to compound prior injustice the Anti-Compounding Injustice (ACI) principle. This principle, if it exists, is likely to be relevant to analyzing the moral issues raised by the increasing influence of so-called "big data" and its combination with the computational power of machine learning and artificial intelligence (AI). Decisions that rely on big data and machine learning are similar in kind to other evidence-based decisions that are grounded in less comprehensive information and analyzed with less powerful predictive tools; where big-data-driven decisions differ is in degree. If more types of decisions are data-driven in this way, and these decisions are grounded in more data, then these new technological tools may compound more injustice than was possible before. If so, this is of moral concern.
HMI DAIS recordings can be viewed here.