Difference-Making Causation

We put forth an analysis of causation. The analysis centers on the notion of a causal model that provides only partial information as to which events occur, but complete information about the dependences between the events. The basic idea is this: an event c causes another event e just in case there is a causal model uninformative on c and e in which c makes a difference as to the occurrence of e. We show that our analysis captures more causal scenarios than other counterfactual accounts to date.


Algorithmic and human decision making: for a double standard of transparency

Should decision-making algorithms be held to higher standards of transparency than human beings? How we answer this question directly impacts what we demand from explainable algorithms, how we govern them via regulatory proposals, and how explainable algorithms may help resolve the social problems associated with decision making supported by artificial intelligence. Some argue that algorithms and humans should be held to the same standards of transparency, and that a double standard of transparency is hardly justified. We give two arguments to the contrary and specify two kinds of situations in which higher standards of transparency are required of algorithmic decisions than of human ones. Our arguments have direct implications for the demands on explainable algorithms in decision-making contexts such as automated transportation.

Epistemic Sensitivity and Evidence

In this paper, we put forth an analysis of sensitivity which aims to distinguish individual evidence from merely statistical evidence. We argue that sensitivity is to be understood not as a factive concept, but as a purely epistemic one. Our resulting analysis of epistemic sensitivity gives rise to an account of legal proof on which a defendant may be found liable only on the basis of epistemically sensitive evidence.

Bayesians Still Don't Learn from Conditionals

One of the open questions in Bayesian epistemology is how to rationally learn from indicative conditionals (Douven, 2016). Eva et al. (2019) propose a strategy to resolve this question. They claim that their strategy provides a "uniquely rational response to any given learning scenario". We show that their updating strategy is neither very general nor always rational. Even worse, we generalize their strategy and show that it still fails. Bad news for the Bayesians.
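
For readers unfamiliar with why the question is hard, here is a standard toy illustration (not the strategy of Eva et al., and not the paper's result): naively conditionalizing on the material reading of "if A then B" can lower the probability of the antecedent A, which many take to be an irrational response to learning the conditional.

```python
from itertools import product

# Uniform prior over the four truth-value assignments to A and B.
worlds = list(product([True, False], repeat=2))
prior = {w: 1 / len(worlds) for w in worlds}

def conditionalize(p, prop):
    """Bayesian conditionalization on a proposition (a test on worlds)."""
    z = sum(pr for w, pr in p.items() if prop(w))
    return {w: (pr / z if prop(w) else 0.0) for w, pr in p.items()}

prob_A = lambda p: sum(pr for (a, b), pr in p.items() if a)
material = lambda w: (not w[0]) or w[1]   # "if A then B" read materially

posterior = conditionalize(prior, material)
print(prob_A(prior), round(prob_A(posterior), 3))   # 0.5 0.333
```

Learning the conditional here drops P(A) from 1/2 to 1/3, even though the conditional says nothing about whether A is true. Puzzles of this kind are what any Bayesian updating strategy for conditionals must handle.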

An Analysis of Actual Causation

We put forth an analysis of actual causation. The analysis centers on the notion of a causal model that provides only partial information as to which events occur, but complete information about the dependences between the events. The basic idea is this: c causes e just in case there is a causal model that is uninformative on e and in which e will occur if c does. Notably, our analysis has no need to consider what would happen if c were absent. We show that our analysis captures more causal scenarios than any counterfactual account to date.
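
The criterion above can be sketched computationally. A minimal sketch, assuming a toy representation of causal models as structural equations plus a partial assignment of facts; `propagate` and `causes` are hypothetical helper names, and the paper's formal apparatus is considerably richer than this:

```python
def propagate(facts, equations):
    """Forward-chain structural equations over the known facts."""
    facts = dict(facts)
    changed = True
    while changed:
        changed = False
        for var, (parents, fn) in equations.items():
            if var not in facts and all(p in facts for p in parents):
                facts[var] = fn(*[facts[p] for p in parents])
                changed = True
    return facts

def causes(c, e, facts, equations):
    """c causes e iff the model is uninformative on e,
    yet fixing c = True settles that e occurs."""
    settled = propagate(facts, equations)
    if e in settled:                 # model already informative on e
        return False
    with_c = propagate({**facts, c: True}, equations)
    return with_c.get(e) is True

# Toy model: e occurs exactly when c does; nothing is known about c or e.
eqs = {"e": (("c",), lambda c: c)}
print(causes("c", "e", {}, eqs))    # True
```

In the toy model the empty fact set leaves the model uninformative on e, yet adding c makes e occur, so c counts as a cause of e; note that, as in the analysis, nothing is computed about what would happen if c were absent.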

Causation in Terms of Production

Understanding causation is one of the crucial frontiers of discovery in artificial intelligence, where we increasingly depend on machine learning models that inadequately represent causal relations. Philosophical work analysing the nature of causation lays crucial foundations both for advancing AI itself and for the many deployments of causal reasoning needed to develop democratically legitimate AI.
