Research

AUTOMATING GOVERNANCE

Data and AI are increasingly used—by states and digital platforms—to exercise power over us. What does it mean for that power to be used justly and legitimately? How can we design socio-technical systems that enable legitimate AI?

PERSONALISATION

The most sophisticated AI systems in the world ensure that your every moment online is tailored to you: personalised media, news, ads, prices. What are the consequences for democratic societies? Can we achieve serendipitous recommendations without creating new and troubling power relations?

ALGORITHMIC ETHICS

AI systems can increasingly effect significant changes in the world without intervening human influence. We need to design these systems to take our values into account. But which values? And how can we translate them into algorithmic form?

HUMAN-AI INTERACTION

We fall into predictable errors when we interact with AI; and over time, those interactions change us. What cognitive and other biases should designers of AI systems account for? And how do we avoid ‘moral outsourcing’ in favour of AI systems that make us better moral agents?

Automating Governance

This topic area focuses on how the state and state-like entities use data and AI to exercise power over people. Our goal is first to identify the risks and opportunities associated with these practices (an exercise in legal and moral diagnosis), then to understand what we should be aiming at, and finally to design sociotechnical systems that achieve these objectives. Within this theme, one stream of research addresses the implications of AI for public law; the other examines the broader question of how data and AI lead us to rethink the authority of states and state-like entities (such as digital intermediaries).

On public law, our work encompasses doctrinal investigations of how principles of Australian public law constrain government use of algorithms, as well as surveys of the public use of algorithms in jurisdictions around the world. It also includes work in political philosophy on the moral foundations of administrative law, and work drawing on both political philosophy and philosophy of science to understand why it is so important that those in power can explain their decisions, whether to those directly affected or to those on whose behalf they act. We have explored how to incorporate legally mandated indeterminacy into algorithmically delivered public policy, as well as the moral foundations of discretion in the exercise of power. On the design side, we have developed detailed theoretical work to improve the explainability of planning systems that use model-based diagnosis, and we have drafted a model law for the use of algorithms by governments.

The second stream of research zooms out from administrative law to consider other dimensions of the exercise of power by states and non-state entities using data and AI systems. It includes historical perspectives on how states have always constructed their authority through control over particular kinds of information, and on the role of predictive power and anticipatory governance in political theory, unifying the role of predictions in, for example, economic forecasting with the contemporary political pre-eminence of data science. It also includes a rethinking of political philosophy through the lens of 'Automatic Authorities' (automated systems whose authority over us we have automatically accepted), which in turn yields new analyses of power and its justification. On the design side, we are developing criteria for assessing algorithmic tools designed to answer the most basic question of democratic politics: the boundary problem of redistricting.

Personalisation

Our Personalisation subtheme focuses on the ways in which non-state actors use data and AI to shape our online lives around our revealed interests and behaviours, in order to hold our attention and influence our behaviour. It falls into three research streams, focused on Algorithmic Amplification, Automated Influence, and Bias and Discrimination.

Our Algorithmic Amplification stream includes both computational and qualitative research aimed at understanding how recommender systems direct attention around digital platforms, demonstrating, for example, how YouTube directs users towards ever more extreme content, and revealing the 'winner-takes-all' dynamics of attention flow online. We have advanced research projects on the economics of online attention and on trust in social media, and new projects exploring competition among online memes. Complementing these empirical projects, we are engaged in normative research on how fairness constrains the distribution of online attention, and on precisely what the algorithmic amplification of online speech should aim at.

Automated Influence is the process whereby digital intermediaries use recommender systems and dynamic design to change our behaviour, through targeted advertising, affect recognition, and 'dark patterns', for example. Our work again integrates computational, legal, political, and philosophical approaches. We have a well-developed research project on the law and politics of privacy, including empirical research on Australians' attitudes to privacy, as well as work exploring how data protection regulation fails to adequately protect consumers against manipulation by way of Automated Influence. We have also pursued in-depth philosophical inquiries into the moral standing of Automated Influence, including engaging with psychological research on its actual effectiveness. And we have a postdoctoral researcher, currently awaiting entry to the country, who will work on developing privacy- and fairness-preserving recommender systems.

Bias and Discrimination have been among the central topics in the 'AI Ethics' literature, and we adopt a novel approach to the subject. This includes sceptical work questioning whether putative necessary criteria for the fairness of statistical decisions actually fit that bill, as well as legal research considering how fairness as understood in the machine learning literature maps onto fairness in regulatory instruments like the GDPR. We are enriching the debate by bringing in perspectives from sociology and from theories of structural discrimination. On the design side, we are working closely with partners at the Gradient Institute and IAG to develop tools for the use of data and AI in insurance that deliver their promised social benefits without exacerbating disparate impacts.
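
To give a flavour of the tension that sceptical work probes, here is a minimal arithmetic sketch in Python (an invented example, not code from our projects) of a well-known impossibility result: when two groups have different base rates, a classifier cannot simultaneously equalise positive predictive value, false-negative rate, and false-positive rate.

```python
# Sketch of the base-rate impossibility result (cf. Chouldechova 2017):
# fixing PPV and FNR at a given prevalence mathematically determines the FPR,
# so groups with different base rates are forced to have different FPRs.

def implied_fpr(prevalence, ppv, fnr):
    """False-positive rate forced by fixing PPV and FNR at a given prevalence.

    Derived from PPV = p*s / (p*s + (1-p)*f), with sensitivity s = 1 - FNR.
    """
    sensitivity = 1.0 - fnr
    return prevalence * sensitivity * (1.0 - ppv) / (ppv * (1.0 - prevalence))

# Hypothetical groups with different base rates of the predicted outcome.
for group, prevalence in [("A", 0.5), ("B", 0.2)]:
    fpr = implied_fpr(prevalence, ppv=0.7, fnr=0.2)
    print(f"group {group}: base rate {prevalence:.0%} -> FPR must be {fpr:.3f}")

# Equal PPV and FNR across both groups force unequal FPRs (0.343 vs 0.086),
# so these three 'fairness criteria' cannot all hold at once here.
```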

Algorithmic Ethics

Our Algorithmic Ethics subtheme takes as its starting point that, if we are going to design AI systems that reflect our values as democratic societies, we must figure out how either to train AI systems to learn normative goals and constraints, or else to encode those goals and constraints into the systems directly. Either approach presupposes that it is possible to represent complex normative theories in computationally accessible terms, and to infer decisions that comply with those theories in a computationally tractable manner. Whether this is possible is an open question; answering it requires addressing foundational topics in decision theory and normative ethics, as well as in AI.

We have therefore worked on translating formal representations of moral theories into computational languages, assessing their complexity, and identifying the other requirements for operationalising them in realistic contexts (for example, the data an agent would need in order to choose appropriately). We have considered whether AI systems can act ethically, or whether we should aim primarily for them to be aligned with our chosen values, and we have explored just which values they should be aligned with. We have pursued advanced work in sequential moral decision theory, which has brought philosophical approaches to decision theory closer to the approach adopted in AI planning. In tandem, we have worked on developing foundations for a theory of the ethics of quantum computing.

We are also pursuing foundational work in AI, for example on integrating symbolic and learning approaches in order to realise value-aligned AI systems that are more trustworthy and easier to explain. And we have sought to build on these insights in the design of robots and other autonomous systems: developing standards for evaluating the safety of autonomous vehicles in collaboration with the Assuring Autonomy International Programme in the UK, showing how different moral theories can be represented by AVs, and developing new approaches to the design of strategically compassionate robots.
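
As an illustration of the kind of question at stake, the toy sketch below (invented states and numbers, not one of our actual formalisms) shows one simple way a deontological constraint can be made computationally tractable: actions forbidden by the rule are pruned before value iteration optimises expected reward over what remains.

```python
# Toy example: a deterministic decision problem where a deontological rule
# ("never take the shortcut") removes an action before reward optimisation.
# All states, actions, and numbers are invented for illustration.

STATES = ["start", "risky", "safe", "goal"]
ACTIONS = {
    "start": {"shortcut": ("risky", 0.0), "detour": ("safe", -1.0)},
    "risky": {"proceed": ("goal", 10.0)},
    "safe":  {"proceed": ("goal", 8.0)},
    "goal":  {},
}
FORBIDDEN = {("start", "shortcut")}  # the deontological constraint
GAMMA = 0.95

def constrained_value_iteration(iters=100):
    value = {s: 0.0 for s in STATES}
    for _ in range(iters):
        for s in STATES:
            permitted = [(a, nxt, r) for a, (nxt, r) in ACTIONS[s].items()
                         if (s, a) not in FORBIDDEN]
            if permitted:
                value[s] = max(r + GAMMA * value[nxt] for _, nxt, r in permitted)
    # Extract the best permitted action in each state.
    policy = {}
    for s in STATES:
        permitted = [(a, nxt, r) for a, (nxt, r) in ACTIONS[s].items()
                     if (s, a) not in FORBIDDEN]
        if permitted:
            policy[s] = max(permitted, key=lambda t: t[2] + GAMMA * value[t[1]])[0]
    return policy

# The shortcut would earn more reward, but the constraint rules it out:
print(constrained_value_iteration())
# {'start': 'detour', 'risky': 'proceed', 'safe': 'proceed'}
```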

Human-AI Interaction

The fourth of our original research themes, The Ethics of Human-AI Interaction, starts from the premise that in designing data and AI systems we must take into account the predictable ways in which people will use or misuse those systems, and in particular the predictable cognitive biases that will shape that misuse. We must also attend to the ways in which using new technologies reshapes us as people: the hammer shapes the hand. This involves empirical work on how, for example, we systematically misattribute responsibility when we work in human-machine teams, and how we irrationally defer to automated systems even when we have sufficient reason to be sceptical of their reliability. We are also exploring how working in human-machine teams affects our capacity for moral behaviour and judgment, considering in particular how our practices of role-taking change when we work alongside AI systems, as well as the social implications of outsourcing decisions that require moral judgment to automated systems.

To better understand these problems, we have engaged in foundational work on the affordances of technological (and other) artefacts, and on the nature and components of moral skill. On the design side, we have developed novel approaches to AI planning that take into account the importance not only of acting in value-aligned ways, but also of assuring observers that the agent is complying with those norms, by avoiding ambiguous paths (see the sketch below). We are actively developing, in partnership with Microsoft and IAG, responsible-design workshops that implement foundational research on affordances and role-taking to help AI developers and researchers think through the social impacts of their work. We are developing data visualisation techniques that work for people as we are, sensitive to our predictable cognitive biases. And we are studying and implementing co-design and technology-translation workshops with underserved communities.
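
For the planning work mentioned above, the following toy sketch (an invented example, not one of our planners) illustrates the underlying idea of legible norm compliance: among plans that reach the goal, prefer one whose prefixes an observer could not mistake for the prefix of a norm-violating plan, even at some extra cost.

```python
# Toy 'legibility' scoring: a plan is penalised for every prefix it shares
# with a forbidden plan, since an observer watching those steps cannot yet
# tell whether the agent will comply. All plans here are invented.

GOAL_PLANS = [("a", "b", "goal"), ("a", "c", "goal"), ("d", "e", "goal")]
VIOLATING_PLANS = [("a", "b", "restricted")]  # plans the agent must never take
STEP_COST = 1.0
AMBIGUITY_PENALTY = 5.0  # charged per prefix shared with a violating plan

def prefixes(plan):
    return {plan[:i] for i in range(1, len(plan) + 1)}

def score(plan):
    shared = sum(1 for v in VIOLATING_PLANS
                 for p in prefixes(plan) if v[:len(p)] == p)
    return len(plan) * STEP_COST + shared * AMBIGUITY_PENALTY

# All three plans reach the goal in the same number of steps, but only
# ('d', 'e', 'goal') shares no step with the violating plan, so it wins.
print(min(GOAL_PLANS, key=score))
```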
