Posts in Algorithmic-Ethics
ALGORITHMIC ETHICS OVERVIEW

Our Algorithmic Ethics subproject aims to make progress on the fundamental questions that must be answered before moral considerations can be incorporated into automated systems that make significant state changes without intervening human control. We’ll be tackling foundational questions in moral philosophy and theoretical AI, while also aiming to operationalise our findings in real AI systems, for example in care robots and autonomous vehicles.

Read More
The Ethical Gravity Thesis: Marrian Levels and the Persistence of Bias in Automated Decision-making Systems

Computers are used to make decisions in an increasing number of domains. There is widespread agreement that some of these uses are ethically problematic. Far less clear is where ethical problems arise, and what might be done about them. This paper expands and defends the Ethical Gravity Thesis: ethical problems that arise at higher levels of analysis of an automated decision-making system are inherited by lower levels of analysis. Particular instantiations of systems can add new problems, but not ameliorate more general ones. We defend this thesis by adapting Marr’s famous 1982 framework for understanding information-processing systems. We show how this framework allows one to situate ethical problems at the appropriate level of abstraction, which in turn can be used to target appropriate interventions.

Read More
Difference-Making Causation

We put forth an analysis of causation. The analysis centers on the notion of a causal model that provides only partial information as to which events occur, but complete information about the dependences between the events. The basic idea is this: an event c causes another event e just in case there is a causal model uninformative on c and e in which c makes a difference as to the occurrence of e. We show that our analysis captures more causal scenarios than other counterfactual accounts to date.
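
The counterfactual core of difference-making can be sketched in code. The snippet below is a minimal illustration of that general idea only, not the paper's full analysis (which additionally requires a causal model that is uninformative on c and e); the structural equation `effect` and the helper names are hypothetical.

```python
def effect(c, alt):
    """Hypothetical structural equation: e occurs if c or an alternative cause fires."""
    return c or alt

def makes_a_difference(mechanism, alt):
    # c makes a difference to e when flipping c flips e, holding everything else fixed.
    return mechanism(True, alt) != mechanism(False, alt)

# With no alternative cause present, c is a difference-maker for e.
print(makes_a_difference(effect, alt=False))  # True
# With a backup cause in place, flipping c no longer changes e.
print(makes_a_difference(effect, alt=True))   # False
```

The second call illustrates why simple counterfactual dependence struggles with redundant causation, one of the scenarios that motivates moving to richer causal models.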

Read More
On statistical criteria of algorithmic fairness

Predictive algorithms are playing an increasingly prominent role in society, being used to predict recidivism, loan repayment, job performance, and so on. With this increasing influence has come an increasing concern with the ways in which they might be unfair or biased against individuals in virtue of their race, gender, or, more generally, their group membership. Many purported criteria of algorithmic fairness concern statistical relationships between the algorithm’s predictions and the actual outcomes, for instance requiring that the rate of false positives be equal across the relevant groups. We might seek to ensure that algorithms satisfy all of these purported fairness criteria. But a series of impossibility results shows that no algorithm can do so unless base rates are equal across the relevant groups. What are we to make of these pessimistic results? I argue that none of the purported criteria, except for a calibration criterion, are necessary conditions for fairness, on the grounds that they can all be simultaneously violated by a manifestly fair and uniquely optimal predictive algorithm, even when base rates are equal. I conclude with some general reflections on algorithmic fairness.
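
The tension between these criteria can be illustrated numerically. The toy dataset below is hypothetical (not from the paper): a predictor that is perfectly calibrated within each group nonetheless produces unequal false-positive rates, because the two groups have different base rates.

```python
# Each record: (group, predicted score, actual outcome 0/1). Hypothetical data.
data = (
    [("A", 0.8, 1)] * 8 + [("A", 0.8, 0)] * 2 +   # group A, high scores
    [("A", 0.2, 1)] * 2 + [("A", 0.2, 0)] * 8 +   # group A, low scores
    [("B", 0.8, 1)] * 4 + [("B", 0.8, 0)] * 1 +   # group B, high scores
    [("B", 0.1, 1)] * 2 + [("B", 0.1, 0)] * 18    # group B, low scores
)

def base_rate(group):
    outcomes = [y for g, _, y in data if g == group]
    return sum(outcomes) / len(outcomes)

def positive_fraction(group, score):
    """Calibration check: fraction of actual positives among members of
    `group` who received `score` (calibration requires this to equal `score`)."""
    outcomes = [y for g, s, y in data if g == group and s == score]
    return sum(outcomes) / len(outcomes)

def false_positive_rate(group, threshold=0.5):
    """Among actual negatives in `group`, the fraction classified positive."""
    negatives = [s for g, s, y in data if g == group and y == 0]
    return sum(1 for s in negatives if s >= threshold) / len(negatives)

print(base_rate("A"), base_rate("B"))                        # 0.5 vs 0.24
print(positive_fraction("A", 0.8), positive_fraction("B", 0.8))  # both 0.8
print(false_positive_rate("A"), false_positive_rate("B"))    # 0.2 vs ~0.053
```

Both groups are calibrated at every score level, yet negatives in group A face a false-positive rate nearly four times that of group B, which is the pattern the impossibility results say is unavoidable when base rates differ.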

Read More
Epistemic Sensitivity and Evidence

In this paper, we put forth an analysis of sensitivity which aims to distinguish individual evidence from merely statistical evidence. We argue that sensitivity is not to be understood as a factive concept, but as a purely epistemic one. Our resulting analysis of epistemic sensitivity gives rise to an account of legal proof on which a defendant may be found liable only on the basis of epistemically sensitive evidence.

Read More