Computers are used to make decisions in an increasing number of domains. There is widespread agreement that some of these uses are ethically problematic. Far less clear is where ethical problems arise, and what might be done about them. This paper expands and defends the Ethical Gravity Thesis: ethical problems that arise at higher levels of analysis of an automated decision-making system are inherited by lower levels of analysis. Particular instantiations of systems can add new problems, but not ameliorate more general ones. We defend this thesis by adapting Marr’s famous 1982 framework for understanding information-processing systems. We show how this framework allows one to situate ethical problems at the appropriate level of abstraction, which in turn can be used to target appropriate interventions.
Pamela Robinson presented ‘Moral Uncertainty and Artificial Intelligence’ to Effective Altruism UQ (University of Queensland). Click through for more information.
Pamela Robinson presented ‘Moral Disagreement and Artificial Intelligence’ at AIES'21. Click through for more information.
We put forth an analysis of causation. The analysis centers on the notion of a causal model that provides only partial information as to which events occur, but complete information about the dependences between the events. The basic idea is this: an event c causes another event e just in case there is a causal model uninformative on c and e in which c makes a difference as to the occurrence of e. We show that our analysis captures more causal scenarios than other counterfactual accounts to date.
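For readers who think in code, the following is a minimal, purely illustrative sketch of the difference-making idea in a model that settles the dependences between events but leaves some event occurrences open. The structural-equation example, variable names, and helper functions are our own assumptions for illustration, not the paper's formalism.

```python
from itertools import product

# Toy structural equation: fire occurs iff lightning strikes or a match is struck.
equations = {
    "fire": lambda v: v["lightning"] or v["match"],
}

def completions(partial, exogenous):
    """All total assignments over the exogenous variables consistent with a partial one."""
    unknown = [x for x in exogenous if x not in partial]
    for values in product([False, True], repeat=len(unknown)):
        yield dict(partial, **dict(zip(unknown, values)))

def evaluate(assignment):
    """Apply the structural equations to a total assignment of the exogenous variables."""
    state = dict(assignment)
    for var, eq in equations.items():
        state[var] = eq(state)
    return state

def makes_difference(cause, effect, partial, exogenous):
    """True if the effect occurs in every completion where the cause occurs, and in none where it does not."""
    with_c = [evaluate(dict(t, **{cause: True}))[effect] for t in completions(partial, exogenous)]
    without_c = [evaluate(dict(t, **{cause: False}))[effect] for t in completions(partial, exogenous)]
    return all(with_c) and not any(without_c)

# A model silent on whether the match was struck: lightning does not make a difference to the fire.
print(makes_difference("lightning", "fire", partial={}, exogenous=["lightning", "match"]))                 # False
# A model that settles that no match was struck: lightning does make a difference.
print(makes_difference("lightning", "fire", partial={"match": False}, exogenous=["lightning", "match"]))   # True
```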
User engagement with data privacy and security through consent banners has become a ubiquitous part of interacting with internet services. While previous work has addressed consent banners from interaction design, legal, or ethics-focused perspectives, little research addresses the connections among multiple disciplinary approaches, including tensions and opportunities that transcend disciplinary boundaries. In this paper, we draw together perspectives and commentary from HCI, design, privacy and data protection, and legal research communities, using the language and strategies of "dark patterns" to perform an interaction criticism reading of three different types of consent banners. Our analysis builds upon designer, interface, user, and social context lenses to raise tensions and synergies that arise together in complex, contingent, and conflicting ways in the act of designing consent banners. We conclude with opportunities for transdisciplinary dialogue across legal, ethical, computer science, and interactive systems scholarship to translate matters of ethical concern into public policy.
We provide a proof of principle of a new method for addressing the ethics of autonomous vehicles, the Data-Theories Method, in which vehicle crash data is combined with ethical theory to provide a guide to action for autonomous vehicle algorithm design.
Predictive algorithms are playing an increasingly prominent role in society, being used to predict recidivism, loan repayment, job performance, and so on. With this increasing influence has come an increasing concern with the ways in which they might be unfair or biased against individuals in virtue of their race, gender, or, more generally, their group membership. Many purported criteria of algorithmic fairness concern statistical relationships between the algorithm’s predictions and the actual outcomes, for instance requiring that the rate of false positives be equal across the relevant groups. We might seek to ensure that algorithms satisfy all of these purported fairness criteria. But a series of impossibility results shows that this is impossible, unless base rates are equal across the relevant groups. What are we to make of these pessimistic results? I argue that none of the purported criteria, except for a calibration criterion, are necessary conditions for fairness, on the grounds that they can all be simultaneously violated by a manifestly fair and uniquely optimal predictive algorithm, even when base rates are equal. I conclude with some general reflections on algorithmic fairness.
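As an illustration of the statistical relationships at issue, the sketch below computes two common criteria for a toy predictor applied to two groups with different base rates: equal false positive rates and within-group positive predictive value (a simple stand-in for calibration). The numbers are invented for illustration and are not drawn from the paper.

```python
# Toy illustration of the tension behind the impossibility results: when base rates differ,
# a predictor with the same error rates in both groups cannot also have the same PPV.

def rates(tp, fp, tn, fn):
    """False positive rate and positive predictive value from a confusion table."""
    fpr = fp / (fp + tn)
    ppv = tp / (tp + fp)
    return fpr, ppv

# Hypothetical confusion tables for two groups of 100 people each.
group_a = dict(tp=40, fp=10, tn=40, fn=10)   # base rate 0.5
group_b = dict(tp=16, fp=16, tn=64, fn=4)    # base rate 0.2

fpr_a, ppv_a = rates(**group_a)
fpr_b, ppv_b = rates(**group_b)

print(f"Group A: FPR={fpr_a:.2f}, PPV={ppv_a:.2f}")   # FPR=0.20, PPV=0.80
print(f"Group B: FPR={fpr_b:.2f}, PPV={ppv_b:.2f}")   # FPR=0.20, PPV=0.50
# Equal false positive (and true positive) rates, yet unequal PPV across groups --
# the pattern that the impossibility results show is unavoidable when base rates differ.
```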
Planning under partial observability is essential for autonomous robots. A principled way to address such planning problems is the Partially Observable Markov Decision Process (POMDP). Although solving POMDPs is computationally intractable, substantial advancements have been achieved in developing approximate POMDP solvers in the past two decades. However, computing robust solutions for problems with continuous observation spaces remains challenging. Most on-line solvers rely on discretising the observation space or artificially limiting the number of observations that are considered during planning to compute tractable policies. In this paper we propose a new on-line POMDP solver, called Lazy Belief Extraction for Continuous POMDPs (LABECOP), that combines methods from Monte-Carlo-Tree-Search and particle filtering to construct a policy representation which does not require discretised observation spaces and avoids limiting the number of observations considered during planning. Experiments on three different problems involving continuous observation spaces indicate that LABECOP performs similarly to or better than state-of-the-art POMDP solvers.
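The continuous-observation difficulty can be made concrete with a small particle-filtering sketch. This is not LABECOP itself, only the particle-filter ingredient it builds on: a belief over continuous states is carried by particles, weighted by a continuous observation density, and resampled, with no discretisation of the observation space. The toy dynamics and sensor model below are illustrative assumptions.

```python
import math
import random

def transition(state, action):
    """Toy 1-D dynamics with Gaussian process noise."""
    return state + action + random.gauss(0.0, 0.1)

def observation_density(obs, state):
    """Likelihood of a continuous observation given a state (Gaussian sensor model)."""
    sigma = 0.5
    return math.exp(-0.5 * ((obs - state) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def update_belief(particles, action, obs):
    """Propagate particles, weight by the observation density, and resample."""
    propagated = [transition(s, action) for s in particles]
    weights = [observation_density(obs, s) for s in propagated]
    total = sum(weights)
    if total == 0.0:
        return propagated                       # degenerate case: keep the propagated prior
    weights = [w / total for w in weights]
    return random.choices(propagated, weights=weights, k=len(particles))

belief = [random.uniform(-1.0, 1.0) for _ in range(1000)]   # initial belief particles
belief = update_belief(belief, action=0.2, obs=0.35)
print(sum(belief) / len(belief))                             # posterior mean estimate
```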
We aim to devise a Ramsey Test analysis of actual causation. Our method is to define a strengthened Ramsey Test for causal models. Unlike the accounts of Halpern and Pearl (2005) and Halpern (2015), the resulting analysis deals satisfactorily with both overdetermination and conjunctive scenarios.
Congratulations to Hanna Kurniawati and her co-authors David Hsu and Wee Sun Lee (NUS) on being awarded this year's RSS Test of Time Award!
Angela Zhou (Cornell) gave a talk on algorithmic fairness on the 10th of June 2021. Click through for more information.
Rumi Chunara (NYU) gave a talk on machine learning and health and equity on the 24th of June 2021. Click through for more information.