The Value of Explanations
Computer scientists, lawyers, and STS scholars have devoted much attention to the problem of inscrutability in AI systems (especially those relying on machine learning). Computer scientists have developed novel methods for explaining the outputs of AI systems; lawyers have debated whether European law provides for a 'right to explanation'; STS scholars have described the prevalence of unexplainable AI systems, and proposed institutional models for how to respond to them. Moral and political philosophers are late to the party. But there is work for us to do: while there is a widespread sense of the 'intuitive appeal' of explainable systems, relatively little time has been spent figuring out just what grounds that intuitive appeal. A rich and systematic account of the role of explanations in our moral and political lives may help us better decide what counts as a good explanation, and how to trade off the value of explanations against other things that matter. In this paper, I make a start on that project, using the tools of moral and political philosophy to give an account of the value of explanations.
Seth presented this talk to audiences of philosophers and computer scientists at MIT and Carnegie Mellon University in October 2019.