Posts in Algorithmic Ethics
Belief Revision For Growing Awareness

This paper is about how an agent should rationally update her probabilistic beliefs when her conceptual space (modelled as an algebra of propositions) grows. This is unlike typical cases of learning, in which an agent revises her beliefs about propositions of which she was already aware. We investigate whether the standard learning rules for such cases can be extended to the case of conceptual growth.
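
As a quick illustration of the contrast (a toy Python sketch with made-up worlds w1–w4, not the paper's own proposal): ordinary conditionalization handles evidence about propositions the agent already entertains, while a newly entertained possibility calls for some further rule, such as the naive proportional rescaling shown here.

```python
# Toy illustration (not the paper's proposal): credences over a small
# possibility space, ordinary conditionalization, and a naive extension
# of the space when the agent becomes aware of a new possibility.

def conditionalize(credence, evidence):
    """Standard Bayesian update: P(w | E) for worlds w in the evidence set E."""
    total = sum(p for w, p in credence.items() if w in evidence)
    return {w: (p / total if w in evidence else 0.0) for w, p in credence.items()}

# Original possibility space: the agent is aware of worlds w1, w2, w3.
credence = {"w1": 0.5, "w2": 0.3, "w3": 0.2}

# Typical learning: evidence rules out w3; conditionalization applies.
print(conditionalize(credence, {"w1", "w2"}))   # {'w1': 0.625, 'w2': 0.375, 'w3': 0.0}

# Awareness growth: a new world w4 is added. Conditionalization is silent here,
# since w4 carried no prior probability. One naive option is to give w4 some
# mass and rescale the old worlds proportionally -- whether anything like this
# is rationally required is the kind of question the paper investigates.
def naive_extension(credence, new_world, mass):
    scaled = {w: p * (1 - mass) for w, p in credence.items()}
    scaled[new_world] = mass
    return scaled

print(naive_extension(credence, "w4", 0.1))
```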

Why Time Discounting Should be Exponential: A Reply to Callender

Here HMI CI Katie Steele argues that, on a certain way of modelling an agent's preferences and understanding her "time preferences", exponential time discounting is uniquely rational. If "time preferences" are understood differently, however, exponential time discounting is not uniquely rational. This helps to explain why the prescription of exponential time discounting has many defenders but also many detractors.
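
For background, the sketch below (generic textbook material, not a reconstruction of Steele's argument; the discount parameters are arbitrary) shows why exponential discounting is often singled out: unlike hyperbolic discounting, it never reverses a preference between a smaller-sooner and a larger-later reward as time passes.

```python
# Generic illustration: exponential discounting is dynamically consistent,
# whereas hyperbolic discounting can reverse preferences as the decision
# time approaches the rewards.

def exponential(delay, delta=0.9):
    return delta ** delay

def hyperbolic(delay, k=1.0):
    return 1.0 / (1.0 + k * delay)

def discounted_value(reward, receipt_time, now, discount):
    return reward * discount(receipt_time - now)

# Two options: a smaller-sooner reward at t=10 and a larger-later reward at t=11.
small, t_small = 100, 10
large, t_large = 110, 11

for now in (0, 10):
    for name, d in (("exponential", exponential), ("hyperbolic", hyperbolic)):
        v_small = discounted_value(small, t_small, now, d)
        v_large = discounted_value(large, t_large, now, d)
        pick = "larger-later" if v_large > v_small else "smaller-sooner"
        print(f"t={now:2d} {name:11s}: prefers {pick}")

# Exponential: the same option is preferred at t=0 and at t=10 (consistent).
# Hyperbolic: the preference flips as t=10 approaches (a preference reversal).
```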

Fully Expanding Moral Theories

This talk was given at a conference on Holly Smith’s book Making Morality Work, held at Rutgers on October 18, 2019. I argued that Making Morality Work poses the problem that moral theories must be 'usable', but offers a solution that only partly solves it. I offered a way of extending that solution, but argued that even the extended solution only partly solves the problem, and that we can’t stop there.

An Analysis of Actual Causation

We put forth an analysis of actual causation. The analysis centers on the notion of a causal model that provides only partial information as to which events occur, but complete information about the dependences between the events. The basic idea is this: c causes e just in case there is a causal model that is uninformative on e and in which e will occur if c does. Notably, our analysis has no need to consider what would happen if c were absent. We show that our analysis captures more causal scenarios than any counterfactual account to date.
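
To make the quoted condition concrete, here is a deliberately simplified Python sketch, with a made-up match-and-oxygen example, of a "model" that has complete structural equations but only partial information about which exogenous events occur; it is a toy rendering of the idea, not the authors' formal analysis.

```python
# Toy rendering of the quoted condition (not the authors' formal analysis):
# the structural equations are fully known, but whether some exogenous
# events occur is left open.
from itertools import product

def completions(known, unknown):
    """All total assignments to exogenous variables consistent with the model."""
    for values in product([False, True], repeat=len(unknown)):
        yield {**known, **dict(zip(unknown, values))}

def evaluate(equations, exogenous):
    """Solve the (acyclic) structural equations given exogenous values."""
    state = dict(exogenous)
    for var, eq in equations:            # equations listed in dependency order
        state[var] = eq(state)
    return state

def uninformative_on(var, equations, known, unknown):
    outcomes = {evaluate(equations, ex)[var] for ex in completions(known, unknown)}
    return len(outcomes) > 1             # the model does not settle whether var occurs

def occurs_if_c(c, e, equations, known, unknown):
    states = [evaluate(equations, ex) for ex in completions(known, unknown)]
    return all(s[e] for s in states if s[c])   # wherever c occurs, e occurs too

# Example: fire depends on the match being struck and oxygen being present.
equations = [("fire", lambda s: s["match"] and s["oxygen"])]
known, unknown = {"oxygen": True}, ["match"]   # whether the match is struck is left open

# This model is uninformative on "fire", yet settles that fire occurs if the
# match is struck -- so on this toy rendering, the match counts as a cause.
print(uninformative_on("fire", equations, known, unknown))        # True
print(occurs_if_c("match", "fire", equations, known, unknown))    # True
```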

Learning Domain-Independent Planning Heuristics with Hypergraph Networks

The paper extends a class of deep learning models known as graph neural networks and uses the resulting hypergraph networks to learn generalised heuristics. We show that these heuristics generalise to problems with different goals, to larger problems, and even to problems from domains other than those we trained on. This is the first approach that successfully learns domain-independent heuristics.
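
For readers curious what a hypergraph network computes, the numpy sketch below shows one generic message-passing step over a hypergraph, in which a hyperedge can connect any number of nodes; the weights and aggregation choices are placeholders, not the architecture used in the paper.

```python
# Minimal sketch of one hypergraph message-passing step (generic, not the
# paper's architecture): hyperedges aggregate features from their member
# nodes, and nodes then aggregate messages from incident hyperedges.
import numpy as np

rng = np.random.default_rng(0)

num_nodes, feat_dim = 5, 8
node_feats = rng.normal(size=(num_nodes, feat_dim))

# A hyperedge may connect any number of nodes (unlike an ordinary graph edge).
hyperedges = [(0, 1, 2), (2, 3), (1, 3, 4)]

# Learnable weights in a real model; random placeholders here.
W_edge = rng.normal(size=(feat_dim, feat_dim))
W_node = rng.normal(size=(feat_dim, feat_dim))

def message_passing_step(node_feats, hyperedges):
    # 1. Each hyperedge builds a message from the sum of its members' features.
    edge_msgs = [np.tanh(node_feats[list(e)].sum(axis=0) @ W_edge) for e in hyperedges]
    # 2. Each node sums the messages of the hyperedges it belongs to.
    new_feats = np.zeros_like(node_feats)
    for msg, edge in zip(edge_msgs, hyperedges):
        for node in edge:
            new_feats[node] += msg
    # 3. Combine with the node's own features (residual-style update).
    return np.tanh(new_feats @ W_node + node_feats)

updated = message_passing_step(node_feats, hyperedges)
print(updated.shape)   # (5, 8): same shape, features now reflect hyperedge structure
```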

How to be imprecise and yet immune to sure loss

This paper considers strategies for making decisions in the face of severe uncertainty, when one's beliefs are best represented by a set of probability functions over the possible states of the world (as opposed to a single precise probability function). The question is whether one can employ such a decision strategy without the disadvantage of being vulnerable to sure loss in sequential decision scenarios.
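
As a minimal illustration of the setup (not of any particular strategy defended in the paper), the sketch below represents beliefs as a finite set of probability functions and computes the lower and upper expectations from which imprecise decision rules such as Gamma-maximin are built.

```python
# Minimal illustration of the imprecise-probability setup (not the paper's
# preferred strategy): beliefs as a *set* of probability functions, and the
# lower/upper expectations that imprecise decision rules are built from.

# Three states of the world and a credal set of three probability functions.
credal_set = [
    {"s1": 0.2, "s2": 0.5, "s3": 0.3},
    {"s1": 0.4, "s2": 0.4, "s3": 0.2},
    {"s1": 0.3, "s2": 0.3, "s3": 0.4},
]

def expectation(p, gamble):
    return sum(p[s] * gamble[s] for s in gamble)

def lower_expectation(credal_set, gamble):
    return min(expectation(p, gamble) for p in credal_set)

def upper_expectation(credal_set, gamble):
    return max(expectation(p, gamble) for p in credal_set)

# Two options, described by their payoff in each state.
option_a = {"s1": 10, "s2": 0, "s3": 5}
option_b = {"s1": 4, "s2": 4, "s3": 4}

for name, gamble in (("A", option_a), ("B", option_b)):
    lo = lower_expectation(credal_set, gamble)
    hi = upper_expectation(credal_set, gamble)
    print(f"option {name}: lower expectation {lo:.2f}, upper expectation {hi:.2f}")

# Gamma-maximin, one imprecise decision rule, picks the option with the highest
# lower expectation; the paper asks which such rules avoid sure loss in
# sequential choice problems.
```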
