Humanising Machine Intelligence


just by design

New technologies always bear the stamp of their designers' values. In machine intelligence, those values are deeply etched in the code.

AI sees the world through the data that we provide and curate. Its choices reflect our priorities. Its unintended consequences voice our indifference.

Machine intelligence cannot be morally neutral. We must choose: try to design moral machine intelligence, which sees the world fairly and chooses justly; or else build the next industrial revolution on immoral machines.

To design moral machine intelligence, we must understand how our perspectives reflect power and prejudice. We must understand our priorities, and how to represent them in terms a machine can act on. And we must break new ground in machine learning and AI research.

The HMI project exists to unite some of the Australian National University’s world-leading researchers in the social sciences, philosophy, and computer science around these shared goals. It is an ANU Grand Challenges Program.



towards moral machines

In the discovery stage, we will formulate the design problem. What are the social impacts of machine intelligence now? Where is it advancing social justice and collective benefit; where is it undermining them? When it fails, why is it failing? Will better AI solve the problem, or are there some social problems that AI cannot fix?

Ethical machine intelligence faces fundamental challenges. Practical progress must await new foundations. What must be true about morality, for moral machine intelligence to be possible? How can we represent moral reasons in a language computers understand? How ought moral machines choose under risk and uncertainty?  

On those foundations, we will build the project’s design phase. Together with partners in industry and government, we will use case studies to show that moral machine intelligence is possible. Our goal: systems that not only reliably select morally defensible options under risk and uncertainty, but do so in a way that can be justified to those most affected.

We will launch in 2019, when you’ll be able to read our findings here.


who we are


HMI is a highly focused research project, with a hand-picked team of leading researchers. Each core member adds something unique; all are committed to working closely together to make substantial progress towards moral machine intelligence.

Philosophy and the social sciences are not a bolt-on for the HMI project, but will drive our research agenda alongside computer science. Though we come from different disciplines, we share a common expertise in probabilistic decision-making. Our work, though cross-disciplinary, will meet the highest standards of excellence in each component discipline.

This is the core team. But we are looking for the next generation of path-breaking researchers to help take this project forward. If you’re interested, then join us.
