Posts in Human-AI Interaction
Role-Taking in Human-Human and Human-AI Interaction

Humans and machines now interact routinely as part of daily life in contemporary societies, making it critical to understand the nature of these relationships. This presentation addresses role-taking in human-AI teams. Role-taking is the process of putting oneself in another's shoes, understanding the world from the other's perspective. We use an experimental design to compare how actively humans role-take with AI versus with other humans.

Virtue Signalling: Reassuring Observers of Machine Behaviour

We propose a constraint on machine behaviour: partially observed machine systems ought to reassure observers that they understand the constraints they are under, and that they have abided by, and will continue to abide by, those constraints. Specifically, a system should not follow a course of action that, from the observer's point of view, is not easily distinguishable from a forbidden course of action.

Attributions of Ethical Responsibility by Artificial Intelligence Practitioners

Through in-depth interviews with AI practitioners in Australia, this paper examines perceptions of accountability and responsibility among those who build autonomous systems. We find that AI practitioners envision themselves as mediating technicians, enacting others’ high-level plans and then relinquishing control of the products they produce. The findings highlight “ethics” in AI as a challenge distributed across complex webs of human and mechanized subjects.

Moral Skill and Artificial Intelligence (External Grant)

As humans, our skills define us. No skill is more human than the exercise of moral judgment. We are already using Artificial Intelligence (AI) to automate morally loaded decisions. In other domains of human activity, automating a task diminishes our skill at that task. Will 'moral automation' diminish our moral skill? If so, how can we mitigate that risk and adapt AI to enable moral 'upskilling'? Our project, funded by the Templeton World Charity Foundation, will use philosophy, social psychology, and computer science to answer these questions.
