Our Algorithmic Ethics subproject aims to make progress on the fundamental questions that must be answered in order to incorporate moral considerations into automated systems that can make significant state changes without intervening human control. We will answer foundational questions in moral philosophy and theoretical AI, while also aiming to operationalise these findings in real AI systems, for example in care robots and autonomous vehicles.
Jenny Davis, Apryl Williams, and Michael Yang displace "fair" machine learning with an intersectional reparative approach in this article published by Big Data & Society.
In this paper, we study two unbiased estimators for the Fisher information matrix (FIM) in the context of deep learning. In particular, we derive closed-form expressions for the estimators' variances, bound those variances, analyze the impact of deep neural network structure, and discuss the implications for deep learning.
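The two estimator families can be illustrated on a toy model. The sketch below (our own illustration, not code from the paper) uses a one-parameter Bernoulli-logit likelihood, for which the FIM has a closed form, and compares a score-outer-product estimator with a negative-Hessian estimator; in this simple model the Hessian does not depend on the sampled label, so the second estimator happens to have zero variance:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def fim_score_estimator(theta, m, rng):
    # Monte Carlo estimator built from the outer product of the score:
    # (1/m) * sum_i (d/dtheta log p(y_i | theta))^2, with y_i ~ p(. | theta).
    p = sigmoid(theta)
    y = rng.binomial(1, p, size=m)
    score = y - p  # score of the Bernoulli-logit log-likelihood
    return np.mean(score ** 2)

def fim_hessian_estimator(theta, m):
    # Monte Carlo estimator built from the negative Hessian of the
    # log-likelihood: -(1/m) * sum_i d^2/dtheta^2 log p(y_i | theta).
    # Here the Hessian is -p(1-p), independent of y, so this estimator
    # has zero variance for this particular model.
    p = sigmoid(theta)
    return np.mean(np.full(m, p * (1.0 - p)))

rng = np.random.default_rng(0)
theta = 0.5
exact = sigmoid(theta) * (1.0 - sigmoid(theta))  # closed-form FIM
est1 = fim_score_estimator(theta, 100_000, rng)
est2 = fim_hessian_estimator(theta, 100)
print(exact, est1, est2)
```

Both estimators are unbiased for the same quantity, but their variances differ; comparing such variances in deep networks is what the paper's closed-form expressions make possible.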
Computers are used to make decisions in an increasing number of domains. There is widespread agreement that some of these uses are ethically problematic. Far less clear is where ethical problems arise, and what might be done about them. This paper expands and defends the Ethical Gravity Thesis: ethical problems that arise at higher levels of analysis of an automated decision-making system are inherited by lower levels of analysis. Particular instantiations of systems can add new problems, but not ameliorate more general ones. We defend this thesis by adapting Marr’s famous 1982 framework for understanding information-processing systems. We show how this framework allows one to situate ethical problems at the appropriate level of abstraction, which in turn can be used to target appropriate interventions.
Pamela Robinson presented ‘Moral Uncertainty and Artificial Intelligence’ to Effective Altruism UQ (University of Queensland). Click through for more information.
Pamela Robinson presented ‘Moral Disagreement and Artificial Intelligence’ at AIES'21. Click through for more information.
We put forth an analysis of causation. The analysis centers on the notion of a causal model that provides only partial information as to which events occur, but complete information about the dependences between the events. The basic idea is this: an event c causes another event e just in case there is a causal model uninformative on c and e in which c makes a difference as to the occurrence of e. We show that our analysis captures more causal scenarios than other counterfactual accounts to date.
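The difference-making component of this idea can be sketched computationally. The toy model below (a standard lightning/match forest-fire example, our own illustration) checks only whether toggling a candidate cause changes the effect while the rest of the model is held fixed; the full analysis additionally quantifies over causal models that are uninformative on the candidate cause and effect, which this sketch does not capture:

```python
# Toy structural model: the fire occurs iff lightning strikes or a match
# is dropped. Each endogenous variable is a Boolean function of its causes.
def fire(lightning, match):
    return lightning or match

def makes_a_difference(match_value):
    """Does toggling the candidate cause (lightning) change the effect,
    holding the rest of the model fixed at match_value?"""
    return fire(True, match_value) != fire(False, match_value)

# With no dropped match, lightning makes a difference to the fire.
print(makes_a_difference(False))
# In the overdetermination scenario (match also dropped), plain
# difference-making fails -- the motivation for moving to models that
# are only partially informative about which events occur.
print(makes_a_difference(True))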
We provide a proof of principle of a new method for addressing the ethics of autonomous vehicles, the Data-Theories Method, in which vehicle crash data is combined with ethical theory to provide a guide to action for autonomous vehicle algorithm design.
Predictive algorithms are playing an increasingly prominent role in society, being used to predict recidivism, loan repayment, job performance, and so on. With this increasing influence has come an increasing concern with the ways in which they might be unfair or biased against individuals in virtue of their race, gender, or, more generally, their group membership. Many purported criteria of algorithmic fairness concern statistical relationships between the algorithm’s predictions and the actual outcomes, for instance requiring that the rate of false positives be equal across the relevant groups. We might seek to ensure that algorithms satisfy all of these purported fairness criteria. But a series of impossibility results shows that this is impossible, unless base rates are equal across the relevant groups. What are we to make of these pessimistic results? I argue that none of the purported criteria, except for a calibration criterion, are necessary conditions for fairness, on the grounds that they can all be simultaneously violated by a manifestly fair and uniquely optimal predictive algorithm, even when base rates are equal. I conclude with some general reflections on algorithmic fairness.
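The tension between criteria can be demonstrated on synthetic data. In the sketch below (our own illustration, not the paper's analysis), each group's risk scores are calibrated by construction, since outcomes are drawn with probability equal to the score; yet because the two groups have different base rates, thresholding those scores yields very different false-positive rates across groups:

```python
import numpy as np

rng = np.random.default_rng(1)

def group_report(scores, outcomes, threshold=0.5):
    """Base rate, mean predicted score, and false-positive rate for one group."""
    preds = scores >= threshold
    negatives = outcomes == 0
    return {
        "base_rate": outcomes.mean(),
        "mean_score": scores.mean(),     # matches base rate when calibrated
        "fpr": preds[negatives].mean(),  # P(predicted positive | actual negative)
    }

reports = {}
# Two hypothetical groups whose score distributions differ (Beta parameters
# chosen only to make the base rates unequal).
for group, (a, b) in {"A": (2, 5), "B": (5, 2)}.items():
    scores = rng.beta(a, b, size=50_000)
    outcomes = rng.binomial(1, scores)  # outcome probability equals the score,
                                        # so scores are calibrated by construction
    reports[group] = group_report(scores, outcomes)

print(reports)
```

Despite perfect calibration in both groups, the false-positive rates come apart sharply, a small-scale instance of the impossibility results the abstract discusses.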
We aim to devise a Ramsey Test analysis of actual causation. Our method is to define a strengthened Ramsey Test for causal models. Unlike the accounts of Halpern and Pearl (2005) and Halpern (2015), the resulting analysis deals satisfactorily with both overdetermination and conjunctive scenarios.
In this paper, we put forth an analysis of sensitivity which aims to discern individual from merely statistical evidence. We argue that sensitivity is not to be understood as a factive concept, but as a purely epistemic one. Our resulting analysis of epistemic sensitivity gives rise to an account of legal proof on which a defendant is only found liable based on epistemically sensitive evidence.
The entry is an overview of decision theory and philosophical work on the topic. We revised the entry in 2020. One major addition was recent work on "unawareness", where an agent lacks awareness of some epistemic possibilities, or of the full suite of possible consequences associated with their decision options.