Building Ethics Into Self-Driving Cars: Some Challenges

Pamela Robinson

Effective Altruism ANU Talk Series

Self-driving cars raise two main kinds of ethical problems. One is common to all new technologies: ensuring that adoption of the technology happens in the most beneficial way (or doesn't happen if it wouldn't be beneficial). The other is specific to autonomous decision-making systems like self-driving cars: ensuring that they make good decisions. This second problem was the focus of the talk. Ensuring that self-driving cars make good decisions about important things involves building something like ethics into the cars themselves. By starting with a simple set of easily operationalizable rules and imagining how we could make them more and more sophisticated in response to their limitations, I described some of the many factors that make this problem hard, as well as what we might do about them. One thing that makes the problem especially hard is that there are many ways of making trade-offs between different morally relevant considerations, and many ways of handling uncertainty and ignorance. Choosing one set of rules for a self-driving car to follow (or choosing one way for a self-driving car to learn a set of rules) is difficult when there are so many similar alternatives to choose from, each of which can have significantly different consequences for society. I argued that optimism is warranted if we have a modest goal: building rules into self-driving cars that are good enough to make widespread adoption feasible and beneficial. And I discussed how we might aim for this short-term goal without losing sight of the longer term.
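
To make the "many similar alternatives" point concrete, here is a minimal, purely illustrative sketch in Python. The scenario, candidate actions, harm probabilities, and weights are all hypothetical assumptions introduced for this example rather than anything proposed in the talk; the point is only that two weightings of the same considerations that look very similar can recommend different actions in the same situation.

# Illustrative sketch only: the considerations, probabilities, and weights
# below are hypothetical examples, not a rule set proposed in the talk.
from dataclasses import dataclass


@dataclass
class Outcome:
    """Estimated consequences of one candidate action in one scenario."""
    action: str
    p_harm_passengers: float   # estimated probability of harming passengers
    p_harm_pedestrians: float  # estimated probability of harming pedestrians
    rule_violations: int       # e.g. crossing a solid line


def score(outcome: Outcome, weights: dict) -> float:
    """Collapse several morally relevant considerations into one number.

    Choosing the weights just is choosing how to trade the considerations
    off against one another; the scenario itself does not fix them.
    """
    return -(weights["passengers"] * outcome.p_harm_passengers
             + weights["pedestrians"] * outcome.p_harm_pedestrians
             + weights["rules"] * outcome.rule_violations)


def choose(candidates: list, weights: dict) -> str:
    """Pick the action whose weighted score is least bad."""
    return max(candidates, key=lambda o: score(o, weights)).action


# One scenario, two candidate actions with uncertain consequences.
candidates = [
    Outcome("brake hard in lane", p_harm_passengers=0.10,
            p_harm_pedestrians=0.02, rule_violations=0),
    Outcome("swerve onto shoulder", p_harm_passengers=0.01,
            p_harm_pedestrians=0.06, rule_violations=1),
]

# Two similar weightings recommend different actions in the same situation.
print(choose(candidates, {"passengers": 1, "pedestrians": 1, "rules": 0.01}))  # swerve onto shoulder
print(choose(candidates, {"passengers": 1, "pedestrians": 3, "rules": 0.01}))  # brake hard in lane

Everything morally relevant in this sketch is compressed into three numbers per action and three weights; most of the difficulty the talk describes lives in the steps the sketch skips, such as deciding which considerations to represent at all, how to estimate the probabilities, and who gets to set the weights.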