If we designed AI systems that were morally perfect in a vacuum, but didn't take into account the predictable ways people react when interacting with and using those systems, we would end up with very bad AI systems. We need to take our limitations and biases into account when designing AI systems, but also think about how working with data and AI will change us.
This paper provides the first formalisation and empirical demonstration of a particular safety concern in reinforcement learning (RL)-based news and social media recommendation algorithms. This safety concern is what we call "user tampering" -- a phenomenon whereby an RL-based recommender system may manipulate a media user's opinions, preferences and beliefs via its recommendations as part of a policy to increase long-term user engagement.
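To make the incentive concrete, here is a toy sketch, not the paper's formalisation, of an engagement-rewarded recommendation loop. The user model, opinion-drift dynamics and reward shape are all invented for illustration; the point is only that a policy rewarded on long-term engagement can be incentivised to move the user's state.

```python
# Toy illustration (not the paper's formalisation) of the "user tampering" incentive:
# the user's opinion drifts toward whatever is recommended, and engagement (the RL
# reward) grows as the opinion becomes more extreme. All dynamics are invented here.
import numpy as np

class ToyUser:
    def __init__(self, opinion=0.0, drift=0.1):
        self.opinion = opinion      # opinion in [-1, 1]
        self.drift = drift          # how strongly recommendations move the opinion

    def step(self, recommended_slant):
        # Opinion drifts toward the slant of the recommended content.
        self.opinion += self.drift * (recommended_slant - self.opinion)
        self.opinion = float(np.clip(self.opinion, -1.0, 1.0))
        # Engagement is higher for content matching the opinion, and higher still
        # once the opinion has been pushed toward an extreme.
        return 1.0 - abs(recommended_slant - self.opinion) + abs(self.opinion)

user = ToyUser()
total_engagement = 0.0
for t in range(50):
    # A long-horizon policy that keeps recommending extreme content ends up
    # shifting the user's opinion toward +1, which unlocks more reward over time.
    total_engagement += user.step(recommended_slant=1.0)
print(round(user.opinion, 2), round(total_engagement, 1))
```

A myopic policy would only match the user's current opinion; the long-horizon policy in the sketch does better on cumulative engagement precisely by changing the user, which is the safety concern the abstract names.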
Planning under partial observability is essential for autonomous robots. A principled way to address such planning problems is the Partially Observable Markov Decision Process (POMDP). Although solving POMDPs is computationally intractable, substantial advancements have been achieved in developing approximate POMDP solvers in the past two decades. However, computing robust solutions for problems with continuous observation spaces remains challenging. Most on-line solvers rely on discretising the observation space or artificially limiting the number of observations considered during planning in order to compute tractable policies. In this paper we propose a new on-line POMDP solver, called Lazy Belief Extraction for Continuous POMDPs (LABECOP), that combines methods from Monte Carlo tree search and particle filtering to construct a policy representation that does not require discretised observation spaces and avoids limiting the number of observations considered during planning. Experiments on three different problems involving continuous observation spaces indicate that LABECOP performs similarly to or better than state-of-the-art POMDP solvers.
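For readers unfamiliar with the particle-filtering side of this machinery, the sketch below shows a generic particle-based belief update for a POMDP with a continuous observation space. It is not the LABECOP algorithm itself, and the `transition_model` and `observation_likelihood` callables are illustrative assumptions; it only shows why weighting particles by an observation density removes any need to discretise the observation space.

```python
# Minimal sketch of a particle-filter belief update for a continuous-observation
# POMDP (generic machinery, not LABECOP). transition_model and
# observation_likelihood are user-supplied, assumed-for-illustration functions.
import numpy as np

def particle_belief_update(particles, action, observation,
                           transition_model, observation_likelihood,
                           rng=None):
    """One action/observation step of a particle-based belief.

    particles: array of sampled states approximating the current belief.
    transition_model(state, action, rng) -> stochastically sampled next state.
    observation_likelihood(obs, state) -> density p(obs | state), so the
        continuous observation is handled directly, without discretisation.
    """
    rng = rng or np.random.default_rng()

    # Propagate every particle through the stochastic transition model.
    propagated = np.array([transition_model(s, action, rng) for s in particles])

    # Weight each particle by the likelihood of the observed (continuous) value.
    weights = np.array([observation_likelihood(observation, s) for s in propagated])
    if weights.sum() == 0.0:
        # Degenerate case: no particle explains the observation; fall back to uniform.
        weights = np.ones(len(propagated))
    weights /= weights.sum()

    # Resample to obtain an unweighted particle set representing the new belief.
    idx = rng.choice(len(propagated), size=len(propagated), p=weights)
    return propagated[idx]
```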
Join us at the 4th AAAI/ACM Conference on AI, Ethics, and Society - 19-21 May 2021!
Should decision-making algorithms be held to higher standards of transparency than human beings? The way we answer this question directly impacts what we demand from explainable algorithms, how we govern them via regulatory proposals, and how explainable algorithms may help resolve the social problems associated with decision making supported by artificial intelligence. Some argue that algorithms and humans should be held to the same standards of transparency and that a double standard of transparency is hardly justified. We give two arguments to the contrary and specify two kinds of situations in which higher standards of transparency are required from algorithmic decisions than from human ones. Our arguments have direct implications for the demands placed on explainable algorithms in decision-making contexts such as automated transportation.
This paper presents the notion of Virtue Signalling, a second-order normative constraint that asks agents to perform actions that are unambiguously permissible. We discuss different definitions of Virtue Signalling and show how this type of constraint can affect the behaviour of a robotic agent.
The User Experience Professionals Association (UXPA) hosted a two-part book club webinar for Jenny Davis' book "How Artifacts Afford: The Power and Politics of Everyday Things."
Invited guest lecture for the 3Ai Institute's inaugural graduate cohort. The lecture was rooted in ideas from Jenny Davis's recent book "How Artifacts Afford: The Power and Politics of Everyday Things".
An interview between Vikram Singh and Jenny Davis about her book "How Artifacts Afford: The Power and Politics of Everyday Things" (MIT Press 2020). The interview is published in the DisAssemble Newsletter, a publication for design theorists and practitioners.