Mathematical and Causal Faces of Explainable AI
Atoosa Kasirzadeh
Research Presentation
Recent conceptual discussion of the nature of explainability in Artificial Intelligence (AI) has largely been limited to causal investigations. This paper identifies some shortcomings of this approach in order to strengthen the debate on the subject. Building on recent philosophical work on the nature of explanation, I demonstrate the significance of two non-causal explanatory elements: (1) the mathematical structures that are the grounds for capturing the decision-making situation, and (2) the statistical and optimality facts in terms of which the algorithm is designed and implemented. I argue that these elements feature directly in important aspects of AI explainability and interpretability. I then propose a hierarchical framework that acknowledges the existence of various types of explanation, each of which reveals an aspect of decision making and answers a different kind of why-question. The usefulness of this framework is illustrated by bringing it to bear on some salient questions about AI and society.
This paper has been presented at the Halıcıoğlu Data Science Institute and the Department of Philosophy (University of California, San Diego); the Departments of Philosophy (Stanford University and University of Washington); and the Department of Logic and Philosophy of Science (University of California, Irvine).