updates from HMI

For now we’re updating this page sparingly. Once we have launched and have a project and operations manager in place, there will be a lot more to report. It has been a busy six months since our funding was first agreed, so busy, in fact, that it hasn’t been possible to keep this page up to date while doing everything else that needs doing.

You'll soon be able to read about:

The awesome generative art used to create this website

Funding news

First HMI Public Lecture: Walter Sinnott-Armstrong (Duke)

Second HMI Public Lecture/Launch Event: Shannon Vallor (Santa Clara)

Workshop on AI Politics and Security

Workshop on AI and Ethics

Workshop on AI and Decision Theory

Workshop on AI and Cognitive Science

generative art

The artwork used throughout this site comes from the Behance portfolios of artists Jon Noorlander and Janusz Jurek. Both produce 3D digital sculptures in part using generative design, that is, algorithms that help shape their creations. Noorlander made the abstract shapes; Jurek the bird and hands.

We then used Photoshop’s ‘Content-Aware Fill’ feature, which also relies on algorithms, in this case ones that draw directly on machine learning, to extend the background pattern and create enough space for the site design.

Don’t be fooled: even Photoshop raises plenty of ethical issues! But the interplay between artist and algorithm in generative art is an interesting analogue of the interplay that takes place between people and programs in morally loaded choices.
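
If you’re curious what ‘algorithms that shape their creations’ can mean in practice, here is a toy sketch: a few simple rules plus a little randomness grow surprisingly organic line work. It is only our own illustration in plain Python (writing an SVG file), not the artists’ actual tools or process, and nothing like Photoshop’s machine-learning fill.

# Toy generative-design sketch: grow meandering strands outward from the
# centre of a canvas and save the result as an SVG drawing. Illustrative only.
import math
import random

random.seed(7)  # fix the seed so the 'artwork' is reproducible

WIDTH, HEIGHT = 800, 800
paths = []

for strand in range(60):
    angle = random.uniform(0, 2 * math.pi)   # each strand sets off in a random direction
    x, y = WIDTH / 2, HEIGHT / 2              # ...starting from the centre of the canvas
    points = [(x, y)]
    for step in range(200):
        angle += random.uniform(-0.3, 0.3)    # small random turns give an organic wander
        x += 3 * math.cos(angle)
        y += 3 * math.sin(angle)
        points.append((x, y))
    d = "M " + " L ".join(f"{px:.1f} {py:.1f}" for px, py in points)
    paths.append(f'<path d="{d}" fill="none" stroke="black" stroke-width="0.6"/>')

svg = (f'<svg xmlns="http://www.w3.org/2000/svg" width="{WIDTH}" height="{HEIGHT}">'
       + "".join(paths) + "</svg>")

with open("strands.svg", "w") as f:   # open strands.svg in a browser to see the result
    f.write(svg)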

funding news

As of December 2018, the HMI project is an officially funded ANU Grand Challenge program, with total funding of up to AUD 8 million over five years.

We will be launching in August 2019: watch this space.

walter sinnott-armstrong lecture

How Artificial Intelligence Can Improve Human Moral Judgments

Professor Walter Sinnott-Armstrong (joint work with Joshua August Skorburg)

Abstract: Ethicists usually appeal to their own intuitions with little evidence that their intuitions are reliable or shared by others. Unfortunately, our human moral intuitions are often mistaken when we forget relevant facts, become confused by a multitude of complex facts, or are misled by framing, emotion, or bias. Fortunately, these sources of error can be avoided by properly programmed artificial intelligence. With enough data and machine learning, AI can predict which moral judgments human individuals and groups would make if they were not misled by their human limitations. We will discuss how our group is building an AI to accomplish this goal for kidney exchanges and how this AI could be used to improve human moral judgments in many other areas of ethics, including automated vehicles and weapons.


Biography: Walter Sinnott-Armstrong is Chauncey Stillman Professor of Practical Ethics at Duke University in the Philosophy Department, the Kenan Institute for Ethics, the Duke Institute for Brain Science, and the Law School. He publishes widely in ethics, moral psychology and neuroscience, philosophy of law, epistemology, philosophy of religion, and argument analysis.

Kambri Cultural Centre Cinema, 26 June 2019, 5:30-7:30pm - www.bit.ly/wsa-anu

This is the first in a series of public lectures convened by HMI.
