PhD Updates
We invite applications to join the project through the ANU PhD programs in computer science, law, philosophy, political science and international relations, or sociology. HMI PhD students will develop expertise in their home discipline, but will also build the wider skill set necessary to advance the morality, law and politics of data and AI.
Research Fellowships
We have one research fellow position available at present, and we're always working to bring in more resources and more people. If you see work on the foundations and implementation of democratically legitimate machine intelligence in your future, do reach out!
HMI DAIS Seminars
Angela Zhou (Cornell) gave a talk on algorithmic fairness on the 10th of June 2021. Click through for more information.
Rumi Chunara (NYU) gave a talk on machine learning, health, and equity on the 24th of June 2021. Click through for more information.
Brian Hedden (Australian National University) gave the first HMI DAIS Seminar of 2021. Click through for more information.
HMI has launched the Data, AI & Society (DAIS) public online seminar series to give a platform to new voices working to understand and develop democratically, constitutionally, and culturally legitimate data and AI systems. Seminars are open to the public; find out more here.
Cierra Robson (Harvard University) gave the last HMI DAIS Seminar of the year. Click through for more information.
Naman Goel (Swiss Federal Institute of Technology) gave the twelfth HMI Data, AI and Society public seminar. Click through for more information or view HMI DAIS recordings here.
New Partners
The Centre for Artificial Intelligence and Digital Ethics (CAIDE) facilitates cross-disciplinary research, teaching, and leadership on the ethical, regulatory, and legal issues relating to Artificial Intelligence (AI) and digital technologies at the University of Melbourne. Our research explores the impact, deployment, and governance of these emerging technologies across society. Our approach is to combine legal, ethical, and social perspectives with technological expertise to examine the issues they raise in a holistic manner. We examine questions of fairness, privacy, accountability, and transparency, both to further our understanding and to guide development and appropriate policy settings for effective use across society.
The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S Centre) is a new, cross-disciplinary, national research centre, which aims to create the knowledge and strategies necessary for responsible, ethical, and inclusive automated decision-making. The Centre combines social and technological disciplines in an international industry, research and civil society network that brings together experts from Australia, Europe, Asia and America. It will formulate world-leading policy and practice and inform public debate, with the aim of reducing risks and improving outcomes in the priority domains of news and media, transport, social services and health.
This project brings together the expertise of the ACT Education Directorate, the Gradient Institute and the HMI team. The data gathered by the Education Directorate will be used in conjunction with causal analysis to better understand the factors that influence the educational outcomes of students.
Our goal is to learn from practitioners the real problems they face when deploying data and AI, so that our research stays laser-focused on the problems that matter. We then want to maximise the impact of that research by working with partners in government, industry and civil society. We aim to influence policy, to guide the ethical implementation of data and AI, and to help shape the next generation of democratically legitimate AI systems. If you can help us do that—and if we can help you—then get in touch.
As humans, our skills define us. No skill is more human than the exercise of moral judgment. We are already using AI to automate morally loaded decisions. In other domains of human activity, automating a task diminishes our skill at that task. Will 'moral automation' diminish our moral skill? If so, how can we mitigate that risk, and adapt AI to enable moral 'upskilling'? Our project will use philosophy and social psychology to answer these questions.