Moral Disagreement and Artificial Intelligence
Pamela Robinson presented ‘Moral Disagreement and Artificial Intelligence’ at AIES'21.
Abstract: Artificially intelligent systems will be used to make increasingly important decisions about us. Many of these decisions will have to be made without consensus about the relevant moral facts. I argue that what makes moral disagreement especially challenging is that there are two very different ways of handling it. Political solutions aim for a fair compromise, while epistemic solutions aim at moral truth. Proposals for both kinds of solutions can be found in the AI ethics and value alignment literature, but hardly anything has been said to justify choosing one over the other. I examine what it would take to justify choosing one kind of solution over the other.