This sub-project is about the exercise of power through data and AI systems, and about how that power can be rendered both just and legitimate. Among other themes, we explore how current practices of using data and AI to perform the functions of the administrative state fall short of existing standards of public law; we ask foundational questions about the nature of explanations and the circumstances in which they are morally called for; and we provide both policy advice and technical expertise in the development of just and legitimate data and AI systems.
One of AI's most commercially successful applications—and one of the drivers of innovation—has been to use data and inferences about you to provide a personalised user experience: product recommendations, micro-targeted adverts, tailored newsfeeds, tailored prices. This automated personalisation has one goal: to affect your behaviour, in the pursuit of either profit or power. In this subproject, we explore how personalisation is changing our social world now, what the goals of personalisation should be, and how to realise those goals in real socio-technical systems.
If we designed AI systems that were morally perfect in a vacuum, but didn't take into account the predictable ways people react when interacting with and using those systems, then we would end up with very bad AI systems. We need to take our limitations and biases into account when designing AI systems, but also think about how working with data and AI will change us.
Our Algorithmic Ethics subproject aims to make progress on the fundamental questions that must be answered in order to incorporate moral considerations into automated systems that can make significant state changes without intervening human control. We'll be answering foundational questions in moral philosophy and theoretical AI, while also aiming to operationalise these discoveries in real AI systems, for example in care robots and autonomous vehicles.
What does it mean to understand and design democratic AI? Seth Lazar introduces the methodological approach of the HMI team, and explains the goal of designing democratically legitimate machine intelligence.