We put forth an analysis of causation. The analysis centers on the notion of a causal model that provides only partial information as to which events occur, but complete information about the dependences between the events. The basic idea is this: an event c causes another event e just in case there is a causal model uninformative on c and e in which c makes a difference as to the occurrence of e. We show that our analysis captures more causal scenarios than other counterfactual accounts to date.
We aim to devise a Ramsey Test analysis of actual causation. Our method is to define a strengthened Ramsey Test for causal models. Unlike the accounts of Halpern and Pearl (2005) and Halpern (2015), the resulting analysis deals satisfactorily with both overdetermination and conjunctive scenarios.
Sarita Rosenstock (course convener), Pamela Robinson and Mario Guenther will be running a course on philosophy, AI and society during Semester 2, 2021.
Join us at the 4th AAAI/ACM Conference on AI, Ethics, and Society - 19-21 May 2021!
Should decision-making algorithms be held to higher standards of transparency than human beings? The way we answer this question directly impacts what we demand from explainable algorithms, how we govern them via regulatory proposals, and how explainable algorithms may help resolve the social problems associated with decision making supported by artificial intelligence. Some argue that algorithms and humans should be held to the same standards of transparency, and that a double standard of transparency is hardly justified. We give two arguments to the contrary and specify two kinds of situations in which higher standards of transparency are required from algorithmic decisions as compared to human ones. Our arguments have direct implications for the demands on explainable algorithms in decision-making contexts such as automated transportation.
In this paper, we put forth an analysis of sensitivity which aims to discern individual from merely statistical evidence. We argue that sensitivity is not to be understood as a factive concept, but as a purely epistemic one. Our resulting analysis of epistemic sensitivity gives rise to an account of legal proof on which a defendant is only found liable based on epistemically sensitive evidence.
One of the open questions in Bayesian epistemology is how to rationally learn from indicative conditionals (Douven, 2016). Eva et al. (2019) propose a strategy to resolve this question. They claim that their strategy provides a "uniquely rational response to any given learning scenario". We show that their updating strategy is neither very general nor always rational. Even worse, we generalize their strategy and show that it still fails. Bad news for the Bayesians.
Should we use large-scale facial recognition systems? This article in The Conversation distinguishes between facial recognition and face surveillance and argues that we should demand a moratorium on face surveillance.
We put forth an analysis of actual causation. The analysis centers on the notion of a causal model that provides only partial information as to which events occur, but complete information about the dependences between the events. The basic idea is this: c causes e just in case there is a causal model that is uninformative on e and in which e will occur if c does. Notably, our analysis has no need to consider what would happen if c were absent. We show that our analysis captures more causal scenarios than any counterfactual account to date.
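The core idea — that a model may leave some events unsettled while fully specifying their dependences, and that c causes e when such a model settles that e occurs if c does — can be sketched with Boolean structural equations. The following is a minimal toy illustration, not the paper's formal analysis: the forest-fire scenario, variable names, and helper function are hypothetical choices made here for concreteness.

```python
# Toy sketch of a causal model as a Boolean structural equation.
# Events are Boolean variables; the equation fixes the dependences,
# while some variables are left unknown (partial information).
# All names here are illustrative, not taken from the paper.

def forest_fire(lightning, match):
    """The forest burns if lightning strikes or a lit match is dropped."""
    return lightning or match

def e_occurs_whenever_c_does(equation):
    """Check that the effect occurs once c does, however the unknown turns out."""
    # Set c (lightning) to True and quantify over the unknown (match):
    # if the fire occurs either way, the partially informative model
    # settles that e occurs if c does, without considering c's absence.
    return all(equation(True, unknown) for unknown in (False, True))

print(e_occurs_whenever_c_does(forest_fire))  # True
```

Note that the check never evaluates the equation with `lightning=False`, which mirrors the abstract's claim that the analysis has no need to consider what would happen if c were absent.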
What does a rational agent or an AI learn from a conditional? Günther (2018) proposed a method for the learning of indicative conditionals. Here, we extend the method with a distinction between indicative and subjunctive conditionals. As a result, the method covers the learning of subjunctive conditionals as well.
The third instalment of the international conference series Decision Theory and the Future of AI brought together renowned experts in decision theory and AI to discuss the concerns raised by algorithmic decision making.
Understanding causation is one of the crucial frontiers of discovery in artificial intelligence, where we increasingly depend on machine learning models that inadequately represent causal relations. Philosophical work analysing the nature of causation lays crucial foundations both for advancing AI itself, and for the many deployments of causal reasoning necessary to develop democratically legitimate AI.