Understanding and Designing Democratic AI
The first question raised whenever one discusses the incorporation of values into AI systems is: 'whose values?' But this question is asked as though AI were the only domain of social life in which moral disagreement obtains. Of course it is not. And the only viable solution developed in human history for accommodating moral disagreement, without relying on mass oppression of those who disagree, is democracy. Democracy itself is, of course, a vessel that can be filled with many things: democracies must respect fundamental human rights, cultural group rights, indigenous land claims, and much more. But the key point is that the process of selecting the values we design into AI systems should be the same as our process for resolving other evaluative conflicts. We will need to adapt democratic institutions to play this role, as we must adapt them continuously for all social and technological change, but democracy should be our lodestar.
It's a platitude that to make real progress on understanding and designing any complex system, we need to draw on different domains of knowledge and different experiences. No single research project or approach has a monopoly on truth, and the greatest progress will be made by trying as many different avenues as we can. At the ANU we have a research project on democratic AI, Humanising Machine Intelligence, which brings together world-leading experts and rising stars in a diverse array of fields across computer science, the humanities, and the social sciences. We wanted to attempt a genuinely collaborative multidisciplinary research project: one in which we didn't just treat scholars from other fields as consultants, helping each of us answer problems defined from within our own disciplines, but in which we framed our research questions together, drawing on all of our expertise. Framing the right problem isn't just a matter of bringing the right researchers together, though. During our first year we have also worked closely with partners in government, industry, and civil society to understand where they see the need for the kind of research we can do. We have also helped build the community of researchers working on Data, AI and Society in Australia, and linked up with like-minded groups in the world's other leading universities, ensuring that our approach remains at the global cutting edge.
Through this approach, we've settled on four key research themes, which we think constitute the most pressing sets of questions where our distinctive approach can make a real difference. In each case, our methodology is the same: we use the tools of our constituent disciplines to build a comprehensive picture of each problem. This means empirical work to understand the opportunities and risks associated with a particular application of data and AI; foundational work to enable a more robust moral diagnosis of where we are now, and to establish the goals we should aim at as we seek to develop democratic AI; and design work to build both technical and sociotechnical systems that realise those goals and mitigate the risks of ever-increasing automation in all its forms. Our constituent fields, spanning philosophy, computer science (including machine learning, planning, robotics, and logic), law, sociology, political science, and international political economy, together enable us to get a full picture of each of our research areas, and to provide a full stack of well-grounded advice and suggestions to our partners in practice. The field of data and AI is of course huge, with many powerful interests at stake, and our role as part of the national university is to help address the great questions raised by our technological moment without fear or favour, in an entirely independent way. You can read about our four projects here.