PERSONALISATION OVERVIEW

Our Personalisation subtheme focuses on the ways in which non-state actors use data and AI to shape our online lives around our revealed interests and behaviours, in order to hold our attention and influence what we do. It comprises three research streams: Algorithmic Amplification, Automated Influence, and Bias and Discrimination.

Our Algorithmic Amplification stream combines computational and qualitative research on how recommender systems direct attention around digital platforms, demonstrating, for example, how YouTube's recommendations can steer users towards ever more extreme content, and documenting the 'winner takes all' dynamics of online attention flows. We have advanced research projects on the economics of online attention and on trust in social media, as well as new projects exploring how online memes compete for attention. Complementing these empirical projects, we are engaged in normative research on how fairness constrains the distribution of online attention, and on precisely what the algorithmic amplification of online speech should aim at.
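
To make the 'winner takes all' dynamic concrete, the following is a minimal sketch of a rich-get-richer process: our own illustration in Python, not one of the stream's published models, with a hypothetical simulate_attention function. It assumes a naive popularity-based recommender that allocates each new view with probability proportional to the views an item has already received.

    import random
    from collections import Counter

    def simulate_attention(num_items=100, num_views=100_000, seed=0):
        """Toy rich-get-richer model: each new view goes to an item with
        probability proportional to (current views + 1), as a naive
        popularity-based recommender might allocate attention."""
        rng = random.Random(seed)
        views = [0] * num_items
        for _ in range(num_views):
            # Weight each item by its view count; the +1 keeps items with
            # no views recommendable.
            weights = [v + 1 for v in views]
            winner = rng.choices(range(num_items), weights=weights, k=1)[0]
            views[winner] += 1
        return Counter(dict(enumerate(views)))

    views = simulate_attention()
    top = sum(count for _, count in views.most_common(10))
    print(f"Top 10 of 100 items capture {top / sum(views.values()):.0%} of views")

Even from a uniform start, these dynamics concentrate most views on a small number of items, which is the pattern our empirical work on attention flows investigates.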

Automated Influence is the process whereby digital intermediaries use recommender systems and dynamic design to change our behaviour, for example through targeted advertising, affect recognition, and 'dark patterns'. Our work here again integrates computational, legal, political, and philosophical approaches. We have a well-developed research project on the law and politics of privacy, including empirical research on Australians' attitudes to privacy, as well as work exploring how data protection regulation fails to protect consumers adequately against manipulation by way of Automated Influence. We have also pursued in-depth philosophical inquiries into the moral standing of Automated Influence, engaging with psychological research on how effective it actually is. And a postdoctoral researcher, currently awaiting entry to the country, will join us to develop privacy- and fairness-preserving recommender systems.
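
As purely illustrative background for that project, and not a forecast of its eventual approach, one standard building block for privacy-preserving recommendation is the differentially private release of aggregate statistics. The sketch below, with a hypothetical dp_item_counts function, adds Laplace noise to item view counts; it assumes each user contributes at most one view in total, so changing one user's data changes one count by at most 1 (an L1 sensitivity of 1).

    import random

    def dp_item_counts(true_counts, epsilon, seed=None):
        """Release item view counts with epsilon-differential privacy via
        the Laplace mechanism, assuming L1 sensitivity of 1."""
        rng = random.Random(seed)
        scale = 1.0 / epsilon  # Laplace noise scale = sensitivity / epsilon
        noisy = {}
        for item, count in true_counts.items():
            # A Laplace(0, scale) sample is the difference of two
            # independent Exponential(1/scale) samples.
            noise = rng.expovariate(1 / scale) - rng.expovariate(1 / scale)
            noisy[item] = count + noise
        return noisy

    counts = {"video_a": 5400, "video_b": 310, "video_c": 12}
    print(dp_item_counts(counts, epsilon=0.5, seed=42))

A recommender trained only on such noisy aggregates inherits the privacy guarantee; how much recommendation quality that costs is the kind of trade-off such work has to confront.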

Bias and Discrimination have been among the central topics in the 'AI Ethics' literature; we adopt a novel approach to the subject. Our sceptical work questions whether putative necessary criteria for the fairness of statistical decisions actually fit that bill, and our legal research considers how fairness as understood in the machine learning literature maps onto fairness in regulatory instruments such as the GDPR. We are enriching the debate by bringing in perspectives from sociology and from theories of structural discrimination. On the design side, we are working closely with partners at the Gradient Institute and IAG to develop tools for the use of data and AI in insurance that deliver the promised social benefits without exacerbating disparate impacts.
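
To fix ideas about the statistical fairness criteria at issue, here is a short sketch, our own illustration with hypothetical helper functions rather than project or partner code, computing two widely discussed metrics: the demographic parity gap (the difference in positive-prediction rates across two groups) and the equal opportunity gap (the difference in true positive rates, one component of equalized odds).

    def positive_rate(preds):
        """Share of individuals receiving a positive prediction."""
        return sum(preds) / len(preds)

    def demographic_parity_gap(preds_a, preds_b):
        """Difference in positive-prediction rates between two groups."""
        return abs(positive_rate(preds_a) - positive_rate(preds_b))

    def true_positive_rate(preds, labels):
        """P(prediction = 1 | label = 1)."""
        hits = [p for p, y in zip(preds, labels) if y == 1]
        return sum(hits) / len(hits)

    def equal_opportunity_gap(preds_a, labels_a, preds_b, labels_b):
        """Difference in true positive rates across two groups."""
        return abs(true_positive_rate(preds_a, labels_a)
                   - true_positive_rate(preds_b, labels_b))

    # Toy data: the same predictions satisfy one criterion but not the other.
    preds_a, labels_a = [1, 1, 0, 0], [1, 0, 1, 0]
    preds_b, labels_b = [1, 0, 0, 0], [1, 1, 0, 0]
    print(demographic_parity_gap(preds_a, preds_b))                     # 0.25
    print(equal_opportunity_gap(preds_a, labels_a, preds_b, labels_b))  # 0.0

Well-known impossibility results show that criteria like these generally cannot all be satisfied at once when base rates differ across groups, which is one natural motivation for asking whether any of them is genuinely necessary for fairness.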