Humans and machines regularly interact as part of daily life in contemporary societies, and it is critical to understand the nature of these relationships. This presentation addresses role-taking in human-AI teams. Role-taking is the process of putting the self in the shoes of another, understanding the world from the other's perspective. We use an experimental design to determine how actively humans role-take with AI compared with how role-taking is activated when encountering other humans.
We put forth an analysis of actual causation. The analysis centers on the notion of a causal model that provides only partial information as to which events occur, but complete information about the dependences between the events. The basic idea is this: c causes e just in case there is a causal model that is uninformative on e and in which e will occur if c does. Notably, our analysis has no need to consider what would happen if c were absent. We show that our analysis captures more causal scenarios than any counterfactual account to date.
What does a rational agent or an AI learn from a conditional? Günther (2018) proposed a method for the learning of indicative conditionals. Here, we extend the method with a distinction between indicative and subjunctive conditionals. As a result, the method covers the learning of subjunctive conditionals as well.
I discussed ways in which seemingly value-neutral decisions that technology workers make can have major moral implications, and how to think critically and proactively about them.
Video conferences are now king. But a popular technology could be putting corporate privacy at risk, with companies having little power to prevent it.
The talk explored the data privacy issues stemming from the use of smart contracts, and compared the effects of the General Data Protection Regulation and the Australian Privacy Act. In particular, by focusing on smart contracts, the presentation explored how distributed ledger technology causes serious privacy headaches by prioritizing the elimination of the need for trust.
This talk was given to the effective altruism society at ANU on October 1, 2019. I described ethical problems associated with the project of designing ethical self-driving cars, what makes the project especially difficult, what we might do about it, and why those concerned with doing the most good should care.
Proposed legislation will open the way to sharing the vast quantities of data held by the Australian government, without needing our consent. While it promises to enable the smooth service delivery citizen-consumers have come to expect, it also challenges traditional notions of privacy, consent and trust in the public sphere.
The adoption of emotion detection technology is rapidly expanding. Facebook in particular has received significant media attention in this regard. But how does the continued development and deployment of this technology in an online setting fit within the current EU regulatory framework?
Christian Barry and Seth Lazar consider what justifies requiring some people to bear costs for the sake of others in the public health response to COVID-19.
In this conference paper, Dr Will Bateman presented a technically-embedded analysis of doctrinal legal issues that arise in the use of artificial intelligence (AI) by regulators, government administrators and other legal actors. The paper was delivered to the collected Justices of the Supreme Court of New South Wales, with special guest Justices from the High Court of Australia and the Supreme Court of the United Kingdom.
In this submission, Dr Will Bateman (with Dr Julia Powles) responded to the Australian Human Rights Commission's Technology and Human Rights Discussion Paper. The submission focused on three areas of reform: the use of self-regulation and cost-benefit analyses in the regulation of human rights; the remedial force of human rights law; and the powers given to any 'AI Safety Commissioner'.