EdTech, data privacy, children, and children's rights.
This talk was given at a conference on Holly Smith’s book, Making Morality Work, held at Rutgers on October 18, 2019. I argued that Making Morality Work poses the problem that moral theories must be 'usable', but then offers a solution that only partly solves it. I offered a way to extend the solution, but argued that even that only partly solves the problem, and that we can’t stop there.
In this talk, I argue that a counterfactual theory of mathematical explanation in the sciences faces practical problems.
In this talk, I discuss the bridging role of mathematics in the empirical sciences as a reliable connecting scheme in our explanatory reasoning from lower-level to higher-level phenomena. I support this discussion by analyzing two explanations, one from biology and one from physics.
Humans and machines regularly interact as part of daily life in contemporary societies. It is critical to understand the nature of these relationships. This presentation addresses role-taking in human-AI teams. Role-taking is a process of putting the self in the shoes of another, understanding the world from the other's perspective. We use an experimental design to determine how actively humans role-take with AI as compared with role-taking activation when encountering other humans.
We put forth an analysis of actual causation. The analysis centers on the notion of a causal model that provides only partial information as to which events occur, but complete information about the dependences between the events. The basic idea is this: c causes e just in case there is a causal model that is uninformative on e and in which e will occur if c does. Notably, our analysis has no need to consider what would happen if c were absent. We show that our analysis captures more causal scenarios than any counterfactual account to date.
What does a rational agent or an AI learn from a conditional? Günther (2018) proposed a method for the learning of indicative conditionals. Here, we amend the method with a distinction between indicative and subjunctive conditionals. As a result, the method covers the learning of subjunctive conditionals as well.
I discussed ways in which seemingly value-neutral decisions that technology workers make can have major moral implications, and how to think critically and proactively about them.
The talk explored the data privacy issues stemming from the use of smart contracts and compared the effects of the General Data Protection Regulation and the Australian Privacy Act. In particular, by focusing on smart contracts, the presentation explored how distributed ledger technology creates serious privacy problems through its prioritization of eliminating the need for trust.
This talk was given to the effective altruism society at ANU on October 1, 2019. I described ethical problems associated with the project of designing ethical self-driving cars, what makes the project especially difficult, what we might do about it, and why those concerned with doing the most good should care.
In this talk, I introduce a philosophically informed framework for the varieties of explanation used to make AI decisions transparent. This paper has been presented at the Halıcıoğlu Data Science Institute and the Department of Philosophy (University of California, San Diego); the Departments of Philosophy (Stanford University and University of Washington); and the Department of Logic and Philosophy of Science (University of California, Irvine).
We propose a constraint on machine behaviour: partially observed machine systems ought to reassure observers that they understand the constraints they are under, and that they have abided by and will continue to abide by those constraints. Specifically, a system should not follow a course of action that, from the point of view of the observer, is not easily distinguishable from a course of action that is forbidden.