making new knowledge

Humanising machine intelligence is impossible without new knowledge. There are substantial theoretical challenges to overcome, and unanswered questions to resolve.

To design more ethical machine intelligence we must make substantial advances in fundamental AI and machine learning research. World-leading expertise in these fields is essential.

But this is not enough on its own. Making machines that realise social justice and collective benefit (rather than undermining them) requires understanding the social impacts of technological change. Social science is necessary too.

And we can’t deliver moral machines unless we understand morality. Philosophers have studied ethics, the systematic study of right and wrong, for at least 2,500 years. Aiming to devise ethical AI without drawing on these millennia of research will lead at best to reinventing the wheel, at worst to predictable and serious error.

The key to progress is collaborative research between world-leading experts in all of these fields. That’s what the HMI project will deliver.


the substance of research

Collaborative work is built on actual physical collaboration. We’ll hold a weekly meeting—seminars, workshops, brown-bags, strategy sessions; whatever we need. And we’ll hold lots of workshops and public lectures—opportunities for people from across Australia to join in shaping the next generation of ethical machine intelligence.

In 2019 we already have plans for major workshops on machine intelligence +

…politics and security

…ethics

…decision theory

We are also planning to hold regular public lectures, starting with NYU/AI Now’s Kate Crawford and Duke’s Walter Sinnott-Armstrong.
