Humanising Machine Intelligence

 

just by design

Every new technology bears its designers’ stamp. For Machine Intelligence, our values are etched deep in the code.

Machine Intelligence sees the world through the data that we provide and curate. Its choices reflect our priorities. Its unintended consequences voice our indifference. It cannot be morally neutral.

We have a choice: try to design morally defensible machine intelligence, which sees the world fairly and chooses justly; or else build the next industrial revolution on immoral machines.

To design morally defensible machine intelligence, we must understand how our perspectives reflect power and prejudice. We must understand our priorities, and how to represent them in terms a machine can act on. And we must break new ground in machine learning and AI research.

The HMI project unites some of the Australian National University’s world-leading researchers in the social sciences, philosophy, and computer science around these shared goals. It is an ANU Grand Challenge.



towards moral machines

In the discovery stage, we will formulate the design problem. What are the social impacts of machine intelligence now? Where is it advancing social justice and collective benefit, and where is it undermining them? When it fails, why is it failing? Will better AI solve the problem, or are there some social problems that AI cannot fix? What are the potential consequences for human moral agency of increasing reliance on AI?

Morally defensible machine intelligence faces fundamental challenges. Practical progress must await new foundations. What must be true about morality for moral machine intelligence to be possible? How can we represent moral reasons in a language computers understand? How ought machines to choose under risk and uncertainty? How can we ensure that the development of machine intelligence strengthens, rather than undermines, human moral capacities?

On those foundations, we will build the project’s design phase. Together with partners in industry and government, we will use case studies to show that morally defensible machine intelligence is possible. Our goal: systems that not only reliably select the right options under risk and uncertainty, but do so in a way that can be justified to those most affected, and that enable, rather than supplant, human moral agency.

We will launch in August 2019, after which you’ll be able to read about our findings here.

 
 

launching august

 

Public Launch of the HMI Project at ANU: Should the Future of Intelligent Machines be Humane or Humanising?

Professor Shannon Vallor

Regis and Dianne McKenna Professor of Philosophy at Santa Clara University in Silicon Valley; AI Ethicist and Visiting Researcher at Google

In the coming decades, the spread of commercially viable artificial intelligence is projected to transform virtually every sociotechnical system, from finance and transportation to healthcare and warfare. Less often discussed is the growing impact of AI on human practices of self-cultivation, especially those practices critical to the development of intellectual and moral virtues. The art of moral self-cultivation is as old as human history, and is one of the few truly unique capacities of our species. Today this humane art has largely receded from the modern mind, with increasingly devastating consequences at local and planetary scales. Reclaiming it may be essential to averting catastrophe for our species, and many others. How will AI affect this endangered art? What uses of AI risk impeding or denaturing our practices of moral cultivation? What uses of AI could amplify and sustain our moral intelligence? Which is the better goal for ethical AI: machines that are humane, or machines that are humanising?

Professor Shannon Vallor researches the ethics of emerging technologies, especially AI and robotics. She is the author of Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (Oxford University Press, 2016). She received the 2015 World Technology Award in Ethics, serves on the Board of the non-profit Foundation for Responsible Robotics, is a Visiting Researcher and AI Ethicist at Google, and consults with other leading AI companies on AI ethics.


Manning Clark Hall, 9 August 2019, 6:00 pm. bit.ly/sv-hmi


who we are

HMI is a highly focused research project, with a hand-picked team of leading researchers. Each core member adds something unique; all are committed to working closely together to make substantial progress towards moral machine intelligence.

Philosophy and the social sciences are not a bolt-on for the HMI project, but will drive our research agenda alongside computer science. Though from different disciplines, we share a common expertise in probabilistic decision-making. Our work, though cross-disciplinary, will meet the highest standards of excellence in each component discipline.

 