At HawaiiCon 2020, Claire Benn discussed her chapter 'Playtest and the Power of Virtual Reality: Are Our Fears Real?', which appears in the recently released book Black Mirror and Philosophy: Dark Reflections. Click through for more information.
A virtual workshop on Trust and Safety was held on the 2nd of June 2020, as a joint event with the Trusted Autonomous Systems Defence Cooperative Research Centre, Data61, and 3AI. Click through for more information.
Anthony Asher, Adam Druissi, Seth Lazar, and Tiberio Caetano presented the online seminar 'Data Ethics — A Virtual Session' on the 13th of October 2020. Click through for more information.
This book offers a conceptual update of affordance theory that introduces the mechanisms and conditions framework, providing a vocabulary and critical perspective for the analysis and design of sociotechnical systems.
Mind Design III will update Haugeland's classic reader on the philosophy of artificial intelligence for the modern era. It will contain a mix of classic and contemporary readings, along with a new introduction to contextualise the topic for students. Expected publication date: Q1 2021.
Humans and machines regularly interact as part of daily life in contemporary societies, and it is critical to understand the nature of these relationships. This presentation addresses role-taking in human-AI teams. Role-taking is the process of putting the self in the shoes of another and understanding the world from the other's perspective. We use an experimental design to determine how actively humans role-take with AI, compared with the role-taking they engage in when encountering other humans.
Claire Benn and Seth Lazar recorded an interview with Rashna Farrukh for the Philosopher's Zone podcast on Radio National. The theme: moral skill and artificial intelligence. Does the automation of moral labour threaten to diminish our capacity for moral judgment, much as automation in other areas has negatively impacted human skill?
We propose a constraint on machine behaviour: partially observed machine systems ought to reassure observers that they understand the constraints they are under, and that they have abided by and will continue to abide by those constraints. Specifically, a system should not follow a course of action that, from the point of view of the observer, is not easily distinguishable from a forbidden course of action.
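To make the proposed constraint concrete, here is a minimal sketch in Python. It assumes a finite set of candidate actions, an observation function mapping each action to the trace an observer sees, and a distance threshold standing in for "easily distinguishable"; all of these names and values are hypothetical illustrations, not the authors' implementation.

```python
# Minimal sketch of the proposed constraint (all names and values are
# hypothetical): keep only candidate actions whose observable trace is
# easily distinguishable from the trace of every forbidden action.

from typing import Callable, Iterable, List, Tuple

Observation = Tuple[float, ...]

def distance(a: Observation, b: Observation) -> float:
    """Euclidean distance between two observation vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def reassuring_actions(
    candidates: Iterable[str],
    forbidden: Iterable[str],
    observe: Callable[[str], Observation],
    threshold: float,
) -> List[str]:
    """Return candidates that are permitted and, from the observer's
    point of view, clearly distinguishable from every forbidden action."""
    forbidden = list(forbidden)
    forbidden_obs = [observe(f) for f in forbidden]
    allowed = []
    for action in candidates:
        if action in forbidden:
            continue  # outright forbidden actions are excluded
        obs = observe(action)
        # Drop actions that, as seen by the observer, look too much like
        # some forbidden course of action.
        if all(distance(obs, f) > threshold for f in forbidden_obs):
            allowed.append(action)
    return allowed

# Hypothetical usage: a drone choosing flight paths, where the observer
# sees only a coarse two-dimensional position trace per action.
if __name__ == "__main__":
    traces = {
        "patrol_east":   (1.0, 0.0),
        "patrol_west":   (-1.0, 0.0),
        "approach_base": (0.9, 0.1),   # looks almost like the no-fly path
        "enter_no_fly":  (1.0, 0.05),  # forbidden
    }
    print(reassuring_actions(
        candidates=["patrol_east", "patrol_west", "approach_base"],
        forbidden=["enter_no_fly"],
        observe=traces.__getitem__,
        threshold=0.2,
    ))
    # -> ['patrol_west']
```

In this toy run, patrol_east and approach_base are rejected not because they are forbidden, but because an observer could not easily tell their traces apart from entering the no-fly zone, which is exactly the reassurance the constraint demands.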
Through in-depth interviews with AI practitioners in Australia, this paper examines perceptions of accountability and responsibility among those who make autonomous systems. We find that AI practitioners envision themselves as mediating technicians, enacting others' high-level plans and then relinquishing control of the products they produce. The findings highlight "ethics" in AI as a challenge distributed across complex webs of human and mechanized subjects.
Playtest demonstrates that when our fantasies feel real, and have the power to hurt, they are no longer just a game. Virtual reality can build a bridge between what seems real and what is real, and this means its power to scare us silly is not just novel: it's revolutionary.
As humans, our skills define us, and no skill is more human than the exercise of moral judgment. We are already using Artificial Intelligence (AI) to automate morally loaded decisions. In other domains of human activity, automating a task diminishes our skill at that task. Will 'moral automation' diminish our moral skill? If so, how can we mitigate that risk and adapt AI to enable moral 'upskilling'? Our project, funded by the Templeton World Charity Foundation, will use philosophy, social psychology, and computer science to answer these questions.
The Morality and Machine Intelligence conference brought together academic leaders from institutions across the US, the UK, and Australia, spanning philosophy, social science, and computer science, to openly discuss their latest research on the ethics of machine intelligence.