Designing Democratic AI
The first question raised whenever one discusses the incorporation of values into AI systems is: 'whose values?' But this question is often asked as though AI were the only domain of social life in which moral disagreement obtains. Of course it is not. And the only viable solution developed in human history for accommodating moral disagreement without relying on mass oppression of those who disagree is democracy. Of course, democracy itself is a vessel that can be filled with many things: democracies must respect fundamental human rights, cultural group rights, indigenous land claims, and much more. But the key point is that the process of selecting the values that we design into AI systems should be just the same as our process for resolving other evaluative conflicts. We need to adapt democratic institutions to play this role, as we must adapt them continuously for all social and technological changes, but democracy should be our lodestar.
It's a platitude that to make real progress on understanding and designing complex systems, we must draw on different domains of knowledge and different experiences. No single research project or approach has a monopoly on truth, and the greatest progress will be made by trying as many different avenues as we can. HMI unites world-leading experts and rising stars in computer science, the humanities, and the social sciences. Ours is a genuinely collaborative multidisciplinary research project: we don't just treat scholars from other fields as consultants, helping each of us answer problems defined from within our own disciplines; instead, we frame our research questions together, drawing on all of our expertise. But framing the right problem isn't just a matter of colloquy among academics. We also work closely with partners in government, industry, and civil society. And we have helped build the community of researchers working on Data, AI and Society in Australia and around the world.
We've settled on four key research themes, which we think constitute the most pressing questions on which we can make a difference. In each case, we use the tools of our constituent disciplines to build a comprehensive picture of the problem. This means empirical work to understand the opportunities and risks associated with particular applications of data and AI; foundational work to enable a more robust moral diagnosis of where we are now, and to establish the goals that we should be aiming at; and design work to build both technical and socio-technical systems that realise democratic AI. Our constituent fields—philosophy, computer science, law, sociology, political science, international political economy—together cover each of our research areas, and enable us to provide a full stack of well-grounded advice to our partners in practice. We are part of the national university. Our role is to address the great questions raised by our technological moment without fear or favour, and in an entirely independent way.