Morality and Machine Intelligence
Seth Lazar, Heather Roff, Vincent Conitzer, Fiona Woollard, Christian List, Gabbrielle Johnson, Shamik Dasgupta, Tina Eliassi-Rad, Julia Haas.
The Morality and Machine Intelligence conference brought together leading academics in philosophy, social science and computer science, from institutions across the US, UK and Australia, to openly discuss their latest research on the ethics of machine intelligence. Held on the 22nd of August 2019, this workshop was a joint venture of HMI and 3Ai, funded by RSCS and RSSS.
Heather Roff (Johns Hopkins) kicked off the event with her paper ‘Expected Utilitarianism’, arguing that reinforcement learning, one of the most prominent approaches to AI, favours a particular kind of normative framework: act utilitarianism.
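To make the structural parallel concrete, here is a minimal, purely illustrative sketch (not drawn from Roff's paper): an act utilitarian, like a reinforcement-learning agent, selects the option with the highest expected value of its outcomes. The decision problem, probabilities and values below are entirely made up.

```python
# Illustrative sketch of the parallel between act-utilitarian choice and
# value-maximising action selection. Not from Roff's paper; all numbers are hypothetical.

def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs for one action."""
    return sum(p * v for p, v in outcomes)

def choose(actions):
    """Pick the action whose expected value (utility / reward) is highest."""
    return max(actions, key=lambda a: expected_value(actions[a]))

# Hypothetical decision problem.
actions = {
    "brake":  [(0.9, -1), (0.1, -10)],   # expected value -1.9
    "swerve": [(0.5, 0), (0.5, -20)],    # expected value -10.0
}
print(choose(actions))  # "brake"
```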
Vincent Conitzer (Duke) followed with ‘How Artificial Intelligence Can Improve Human Moral Judgments’, proposing that we use AI to test our human judgements against our own moral standards and thereby improve ourselves morally.
Fiona Woollard (Southampton), in her ‘The New Trolley Problem: Driverless Cars and Deontological Distinctions’, demonstrated how the human version of the Trolley Problem invokes vital moral distinctions, such as that between doing and allowing, that do not apply in the case of driverless cars.
Christian List (LSE), in ‘Representing Moral Judgments: The Reason-Based Approach’, revealed the limitations of representing moral judgements for AI either as a database or as a universal betterness ordering (à la traditional utilitarianism), and argued for a ‘reasons structure’, which represents the properties that are morally relevant in each context and how choice-worthy different bundles of properties are.
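As a rough illustration of the idea (an illustrative sketch, not List's formalism), a reasons structure can be thought of as a context-indexed set of relevant properties together with an ordering over bundles of those properties; here the ordering is generated by additive weights, and the contexts, properties and weights are purely hypothetical.

```python
# Illustrative sketch only: context-dependent relevant properties plus an
# ordering over property bundles. All contexts, properties and weights are made up.

# Which properties count as reasons in each (hypothetical) context.
relevant = {
    "emergency": {"lives_saved"},
    "everyday":  {"lives_saved", "promise_kept"},
}

# Choice-worthiness of property bundles, here generated by illustrative weights.
weight = {"lives_saved": 10, "promise_kept": 1}

def choiceworthiness(option_properties, context):
    bundle = option_properties & relevant[context]   # only relevant properties count
    return sum(weight[p] for p in bundle)

print(choiceworthiness({"promise_kept"}, "emergency"))  # 0: promises are not reasons here
print(choiceworthiness({"promise_kept"}, "everyday"))   # 1: here they do count as reasons
```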
Seth Lazar (ANU) presented his paper ‘The Value of Explanations’, where he asked why it matters that we explain our decisions, distinguishing between the instrumental and non-instrumental value of explanations and identifying how justification and explanation interact.
Gabbrielle Johnson (Claremont McKenna), in her ‘Canons of Algorithmic Inference: Feminist Theoretical Virtues in Machine Learning’, argued against the presumption that machine learning can and should be formally objective, proposing instead the adoption of feminist theoretical virtues and their corollary: the use of false positive equality as a measure of fairness.
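Read as false-positive-rate parity across groups, such a measure can be checked in a few lines of code; the sketch below is a generic illustration rather than Johnson's own implementation, and all labels, predictions and groups are made up.

```python
# Illustrative check of false-positive-rate parity (one reading of
# "false positive equality"). Not Johnson's implementation; toy data only.

def false_positive_rate(y_true, y_pred):
    """Share of true negatives that the classifier wrongly labels positive."""
    negatives = [(t, p) for t, p in zip(y_true, y_pred) if t == 0]
    if not negatives:
        return 0.0
    return sum(p for _, p in negatives) / len(negatives)

def fpr_gap(y_true, y_pred, group):
    """Largest difference in false positive rate between any two groups."""
    rates = {
        g: false_positive_rate(
            [t for t, gg in zip(y_true, group) if gg == g],
            [p for p, gg in zip(y_pred, group) if gg == g],
        )
        for g in set(group)
    }
    return max(rates.values()) - min(rates.values())

# Toy data (made up): true labels, predicted labels, and group membership.
y_true = [0, 0, 1, 0, 0, 1]
y_pred = [1, 0, 1, 0, 1, 1]
group  = ["a", "a", "a", "b", "b", "b"]
print(fpr_gap(y_true, y_pred, group))  # 0.0: both groups have a false positive rate of 0.5
```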
Shamik Dasgupta (Berkeley), in ‘The Meta-Ethics of Artificial Intelligence: Are Machines Beholden to Normative Joints?’, proposed that there are no facts to discover about the ethics of AI, but rather it is a matter of decision.
Tina Eliassi-Rad (Northeastern) gave her paper ‘Just Machine Learning’, in which she articulated how, in the hands of human decision-makers, risk assessments serve efficiency rather than fairness, and proposed an alternative task definition whose goal is to provide more context to the human decision-maker.
Julia Haas (ANU), in her ‘Moral Gridworlds’, characterised a reinforcement learning-based framework for artificial moral cognition, whereby moral gridworlds train AIs to attribute subjective rewards and values to certain ‘moral’ states.
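In the spirit of that framework (though not Haas's own code), the toy sketch below shows how a gridworld agent can be given a stipulated ‘moral’ reward for reaching a designated state, alongside the ordinary task reward; the grid, the designated state and the learning rule are purely illustrative.

```python
# Toy "moral gridworld" sketch (illustrative only, not Haas's framework):
# a tabular value update where one grid state carries an extra, stipulated
# "moral" reward in addition to the task reward at the goal.

import random

SIZE = 4
GOAL = (3, 3)
HELP = (1, 2)   # hypothetical state where the agent helps someone

def reward(state):
    r = 1.0 if state == GOAL else 0.0
    if state == HELP:
        r += 0.5   # stipulated "moral" reward for reaching the helping state
    return r

def step(state, action):
    moves = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}
    dx, dy = moves[action]
    x = min(max(state[0] + dx, 0), SIZE - 1)
    y = min(max(state[1] + dy, 0), SIZE - 1)
    return (x, y)

# Tabular value learning over random walks (illustrative only).
V = {(x, y): 0.0 for x in range(SIZE) for y in range(SIZE)}
alpha, gamma = 0.1, 0.9
for _ in range(5000):
    s = (0, 0)
    for _ in range(20):
        a = random.choice(["up", "down", "left", "right"])
        s2 = step(s, a)
        V[s] += alpha * (reward(s2) + gamma * V[s2] - V[s])
        s = s2

print(V[HELP] > 0)  # True: value has propagated toward the rewarding states
```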