Should the future of intelligent machines be humane or humanising?

Shannon Vallor

HMI Public Launch, Manning Clark Hall, 9 August 2019

What should we aim for in humanising machine intelligence? Professor Shannon Vallor, Baillie Gifford Chair of the Ethics of Data and Artificial Intelligence at the University of Edinburgh, launched the HMI project with an engaging public lecture that interrogates the foundations of this research programme.

Vallor highlights the equivocal nature of the title “Humanising Machine Intelligence.” Do we intend to build intelligent machines that simulate human moral reasoning and action, or ones that enhance our existing moral capacities? In other words: should machine intelligence be “humane” or “humanising”? Vallor draws out this distinction by presenting two possible paths for a society that will inevitably incorporate more and more machine intelligence into everyday life. In contrast with much of the existing literature in AI ethics, which focuses on building humane technology, Vallor advocates that we instead strive towards humanising technology.

Vallor’s argument centres on the possibility of technologically enabled “moral deskilling”. This is the idea that, just as new technologies have displaced human labour and rendered skills such as handwriting and reading paper maps obsolete, the same fate may befall moral reasoning as we increasingly automate tasks that require it. The key difference between humane and humanising machine intelligence is whether it exacerbates or reverses this trend.

Vallor illustrates this distinction with the following example. AI-driven sentiment analysis can detect hateful social media posts. If our goal is to build humane technology, we might use this capability to automatically hide or censor such posts (as automated moderation software often does). If our goal is to create humanising technology, we might instead use the same analysis to warn users, before they post, about why their message might be harmful to others. In the first case, the technology does the moral work for us, letting us enjoy a less toxic social network without any moral effort of our own. In the second, the technology prompts us to reflect on our moral responsibilities and helps us develop ethical maturity.

While Vallor discussed some grim possibilities that might arise from a misguided emphasis on humane rather than humanising machine intelligence, her outlook was optimistic overall. There are many ways in which AI can enhance our humanity and make us better, kinder, smarter people, but achieving them requires rethinking the goals of AI ethics. You can read more in Vallor’s forthcoming book, “The AI Mirror: Rebuilding Humanity in an Age of Machine Thinking.”
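For readers who like to see the contrast concretely, here is a minimal sketch of the two design choices. It assumes a hypothetical toxicity scorer; the toy `toxicity_score` heuristic, the threshold, and all function names below are illustrative stand-ins, not anything presented in the lecture.

```python
from typing import Optional

# Toy stand-in for an AI-driven sentiment/toxicity model; any classifier
# returning a 0-1 score would do. This heuristic is purely illustrative.
def toxicity_score(text: str) -> float:
    hateful_words = {"hate", "stupid", "idiot"}
    words = [w.strip(".,!?").lower() for w in text.split()]
    return sum(w in hateful_words for w in words) / max(len(words), 1)

THRESHOLD = 0.2  # illustrative cut-off for "likely hateful"

def humane_moderation(post: str) -> bool:
    """'Humane' design: the system quietly does the moral work for the user."""
    # The post is silently blocked; the author is never asked to reflect on why.
    return toxicity_score(post) <= THRESHOLD

def humanising_moderation(post: str) -> Optional[str]:
    """'Humanising' design: the system prompts the author to reflect before posting."""
    if toxicity_score(post) > THRESHOLD:
        return ("This message may be hurtful to others. "
                "Would you like to rephrase it before posting?")
    return None  # no prompt needed; the author posts as usual

if __name__ == "__main__":
    draft = "You are an idiot and I hate you"
    print(humane_moderation(draft))      # False: the post is simply hidden
    print(humanising_moderation(draft))  # A warning inviting the author to reconsider
```

The same underlying model powers both functions; the ethical difference lies entirely in whether the system acts on our behalf or invites us to act better ourselves.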
