Mathematical decisions and non-causal elements of explainable AI

Atoosa Kasirzadeh

NeurIPS 2019, Workshop on Human-centric Machine Learning, Vancouver, Canada (13 December 2019)

The social implications of algorithmic decision-making in sensitive contexts have generated lively debates among multiple stakeholders, including moral and political philosophers, computer scientists, and the public. Yet the lack of a common language and conceptual framework for appropriately bridging the moral, technical, and political aspects of the debate prevents the discussion from being as effective as it could be. Social scientists and psychologists are contributing to this debate by gathering a wealth of empirical data, yet a philosophical analysis of the social implications of algorithmic decision-making remains comparatively impoverished. In attempting to address this lacuna, this paper argues that a hierarchy of different types of explanations for why and how an algorithmic decision outcome is achieved can establish the relevant connection between the moral and technical aspects of algorithmic decision-making. In particular, I offer a multi-faceted conceptual framework for the explanations and interpretations of algorithmic decisions, and I claim that this framework can lay the groundwork for a focused discussion among multiple stakeholders about the social implications of algorithmic decision-making, as well as AI governance and ethics more generally.