Posts in Presentation
Fully Expanding Moral Theories

This talk was given at a conference on Holly Smith’s book Making Morality Work, held at Rutgers on October 18, 2019. I argued that Making Morality Work poses the problem that moral theories must be 'usable', but offers a solution that addresses the problem only in part. I proposed a way to extend the solution, yet argued that even the extended version solves the problem only partially, so we cannot stop there.

Role-Taking in Human-Human and Human-AI Interaction

Humans and machines regularly interact in daily life in contemporary societies, and it is critical to understand the nature of these relationships. This presentation addresses role-taking in human-AI teams. Role-taking is the process of putting oneself in another's shoes and understanding the world from the other's perspective. We use an experimental design to compare how actively humans role-take when encountering AI with how actively they role-take when encountering other humans.

An Analysis of Actual Causation

We put forth an analysis of actual causation. The analysis centers on the notion of a causal model that provides only partial information about which events occur, but complete information about the dependences between them. The basic idea is this: c causes e just in case there is a causal model that is uninformative on e and in which e will occur if c does. Notably, our analysis has no need to consider what would happen if c were absent. We show that our analysis captures more causal scenarios than any counterfactual account to date.
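
As a hedged illustration of the condition just stated, the sketch below encodes a toy causal model as boolean structural equations with partial information about which exogenous events occur. The encoding, the function names, and the overdetermination example are assumptions made for exposition, not the paper's formalism; in particular, the full analysis quantifies over causal models, whereas this sketch only checks the condition relative to one given model.

```python
from itertools import product

# A toy encoding of the condition stated above. A "causal model" here is
# a set of boolean exogenous variables, structural equations computing each
# endogenous variable from other variables, and partial information fixing
# the values of only some exogenous variables.

def completions(model, fixed):
    """Enumerate every full assignment consistent with the partial information."""
    open_vars = [v for v in model["exogenous"] if v not in fixed]
    for values in product([False, True], repeat=len(open_vars)):
        world = dict(fixed, **dict(zip(open_vars, values)))
        for var, equation in model["equations"]:  # evaluate in listed order
            world[var] = equation(world)
        yield world

def uninformative_on(model, fixed, e):
    """True when the model settles neither that e occurs nor that it does not."""
    return {world[e] for world in completions(model, fixed)} == {False, True}

def occurs_if(model, fixed, c, e):
    """True when e occurs in every completion in which c occurs."""
    return all(world[e] for world in completions(model, fixed) if world[c])

def satisfies_condition(model, fixed, c, e):
    """Check the stated condition relative to this one model."""
    return uninformative_on(model, fixed, e) and occurs_if(model, fixed, c, e)

# Overdetermination example: e occurs if c or d does, and the model says
# nothing about whether c or d occur, so it is uninformative on e.
model = {
    "exogenous": ["c", "d"],
    "equations": [("e", lambda w: w["c"] or w["d"])],
}
print(satisfies_condition(model, {}, "c", "e"))  # True: whenever c occurs, e occurs
```

Note that the check never asks what would happen were c absent, matching the abstract's point that the analysis dispenses with counterfactual contrasts.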

Smart Contracts and Data Privacy

The talk explored the data privacy issues stemming from the use of smart contracts and compared the effects of the General Data Protection Regulation and the Australian Privacy Act. In particular, by focusing on smart contracts, the presentation explored how distributed ledger technology creates serious privacy problems by prioritizing the elimination of trust.

Mathematical and Causal Faces of Explainable AI

In this talk, I introduce a philosophically informed framework for the varieties of explanation used to make AI decisions transparent. This paper has been presented at the Halıcıoğlu Data Science Institute and the Department of Philosophy (University of California San Diego), the Departments of Philosophy (Stanford University and University of Washington), and the Department of Logic and Philosophy of Science (University of California, Irvine).

Virtue Signalling: Reassuring Observers of Machine Behaviour

We propose a constraint on machine behaviour: partially observed machine systems ought to reassure observers that they understand the constraints they are under and that they have abided, and will abide, by those constraints. Specifically, a system should not follow a course of action that, from the observer's point of view, is not easily distinguishable from a forbidden course of action.
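
As a rough illustration of that constraint, the sketch below treats a course of action as a sequence of steps seen through an observer's observation function, and rejects any plan whose observation trace exactly matches that of some forbidden plan. The observation model, the names, and the example are illustrative assumptions rather than the proposal's formalism, and exact trace equality is a crude stand-in for "not easily distinguishable".

```python
# A minimal sketch of the distinguishability constraint, under the
# assumptions stated above: the observer sees only observe(action)
# at each step, and a plan is reassuring only when its observation
# trace differs from the trace of every forbidden plan.

def trace(plan, observe):
    """The sequence of observations an observer gets while the plan runs."""
    return tuple(observe(action) for action in plan)

def is_reassuring(plan, forbidden_plans, observe):
    """Reject any plan whose observation trace matches a forbidden plan's."""
    seen = trace(plan, observe)
    return all(seen != trace(forbidden, observe) for forbidden in forbidden_plans)

# Example: the observer cannot tell which room the system enters.
def observe(action):
    return "enter_room" if action.startswith("enter_room") else action

forbidden_plans = [("enter_room_B", "take_item")]
print(is_reassuring(("enter_room_A", "take_item"), forbidden_plans, observe))  # False
print(is_reassuring(("enter_room_A", "log_visit"), forbidden_plans, observe))  # True
```

On this reading, the first plan is rejected even though it is permitted, because the observer could not tell it apart from the forbidden one; the reassuring system chooses the second plan instead.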
