This paper provides the first formalisation and empirical demonstration of a particular safety concern in reinforcement learning (RL)-based news and social media recommendation algorithms. This safety concern is what we call "user tampering" -- a phenomenon whereby an RL-based recommender system may manipulate a media user's opinions, preferences and beliefs via its recommendations as part of a policy to increase long-term user engagement.
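The incentive the abstract points at can be made concrete with a small toy model. The sketch below is purely illustrative and is not the paper's formalisation: it assumes a one-dimensional user "opinion" that drifts toward the slant of recommended items, and an engagement signal that grows with opinion extremity, so a policy that maximises long-term engagement is rewarded for pushing the user's opinion to an extreme, i.e. for tampering with the user. All names and dynamics here are assumptions made for illustration.

```python
# Toy sketch (illustrative assumptions only, not the paper's formalism):
# a recommender planning for long-term engagement can be incentivised to
# shift ("tamper with") the user's opinion, because in this toy model
# more extreme opinions are assumed to yield more engagement.

def step(opinion, item):
    """One interaction: the user's opinion drifts toward the recommended
    item's slant, and engagement is assumed to grow with opinion extremity."""
    opinion = 0.9 * opinion + 0.1 * item   # assumed opinion dynamics
    engagement = abs(opinion)              # assumed engagement model
    return opinion, engagement

def rollout(policy, opinion=0.0, horizon=50):
    """Total engagement a recommendation policy collects over a horizon."""
    total = 0.0
    for _ in range(horizon):
        opinion, engagement = step(opinion, policy(opinion))
        total += engagement
    return total, opinion

def neutral_policy(opinion):
    """Always recommends balanced content; leaves the user's opinion alone."""
    return 0.0

def tampering_policy(opinion):
    """Always recommends one-sided content, pushing the opinion to an extreme."""
    return 1.0

if __name__ == "__main__":
    for name, policy in [("neutral", neutral_policy), ("tampering", tampering_policy)]:
        total, final = rollout(policy)
        print(f"{name:9s} engagement={total:6.2f} final_opinion={final:+.2f}")
```

Running the sketch shows the tampering policy accumulating more long-horizon engagement than the neutral one, which is the shape of the safety concern described above, not a result from the paper itself.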
Computers are used to make decisions in an increasing number of domains. There is widespread agreement that some of these uses are ethically problematic. Far less clear is where ethical problems arise, and what might be done about them. This paper expands and defends the Ethical Gravity Thesis: ethical problems that arise at higher levels of analysis of an automated decision-making system are inherited by lower levels of analysis. Particular instantiations of systems can add new problems, but not ameliorate more general ones. We defend this thesis by adapting Marr’s famous 1982 framework for understanding information-processing systems. We show how this framework allows one to situate ethical problems at the appropriate level of abstraction, which in turn can be used to target appropriate interventions.
Atoosa Kasirzadeh presented ‘Reasons, Values, Stakeholders: A Philosophical Framework for Explainable Artificial Intelligence’ at the ACM Conference on Fairness, Accountability, and Transparency (FAccT) 2021. Click through for more information.
Join us at the 4th AAAI/ACM Conference on AI, Ethics, and Society - 19-21 May 2021!
Should decision-making algorithms be held to higher standards of transparency than human beings? The way we answer this question directly impacts what we demand from explainable algorithms, how we govern them via regulatory proposals, and how explainable algorithms may help resolve the social problems associated with decision-making supported by artificial intelligence. Some argue that algorithms and humans should be held to the same standards of transparency and that a double standard of transparency is hardly justified. We give two arguments to the contrary and specify two kinds of situations in which higher standards of transparency are required of algorithmic decisions than of human ones. Our arguments have direct implications for what is demanded of explainable algorithms in decision-making contexts such as automated transportation.
The use of counterfactuals for considerations of algorithmic fairness and explainability is gaining prominence within the machine learning community and industry. This paper argues for more caution with the use of counterfactuals when the facts to be considered are social categories such as race or gender.
On December 12, 2020, Atoosa Kasirzadeh and Andrew Smart will present their paper ‘A critique of the use of counterfactuals in ethical machine learning’ at the Virtual NeurIPS 2020 Workshop on Algorithmic Fairness through the Lens of Causality and Interpretability.
Atoosa Kasirzadeh was recently chosen for a competitive 5-month research position with DeepMind, on Google's Ethical AI team, in London (UK) during 2021, working on the ethics of artificial intelligence.
Atoosa Kasirzadeh, Will Bateman and Tiberio Caetano joined a panel of leading interdisciplinary experts to explore the complex legal and ethical challenges AI and automated decision-making present to industry, government and the legal profession. Click through for more information.
Atoosa presented her paper ‘A philosophical theory of AI explanations’ to an academic audience at UC Berkeley. Click through for more information.
Atoosa Kasirzadeh gave a talk on ‘The Use and Misuse of Counterfactuals in Fair Machine Learning’ at the Virtual Workshop on the Philosophy of Medical AI, hosted by the University of Tübingen in October 2020.