ALGORITHMIC ETHICS OVERVIEW
Our Algorithmic Ethics subtheme takes as its starting point that, if we are going to design AI systems that reflect our values as democratic societies, we must work out how to either train AI systems to learn normative goals and constraints, or encode those goals and constraints into the systems directly. Either approach presupposes that it is possible to represent complex normative theories in computationally accessible terms, and to infer decisions that comply with those theories in a computationally tractable manner. Whether this is possible remains an open question, and answering it requires addressing foundational topics in decision theory and normative ethics, as well as in AI.

We have therefore worked on translating formal representations of moral theories into computational languages, assessing their computational complexity, and identifying the other requirements for operationalising them in realistic contexts (e.g. which data an agent would need in order to choose appropriately). We have asked whether AI systems can act ethically, or whether we should aim primarily for them to be aligned with our chosen values, and we have explored which values those should be. We have pursued advanced work in sequential moral decision theory, bringing philosophical approaches to decision theory closer to the approach adopted in AI planning (a toy sketch of this idea appears at the end of this section). In tandem, we have worked on developing foundations for a theory of the ethics of quantum computing.

We are also pursuing foundational work in AI itself, for example on integrating symbolic and learning approaches in order to realise value-aligned AI systems that are more trustworthy and easier to explain. And we have sought to build on these insights in the design of robots and other autonomous systems: developing standards for evaluating the safety of autonomous vehicles (AVs) in collaboration with the Assuring Autonomy International Programme in the UK, showing how different moral theories can be represented by AVs, and developing new approaches to the design of strategically compassionate robots.
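To make the idea of compiling a moral theory into a decision procedure concrete, here is a minimal, hypothetical sketch in Python. It is not drawn from our publications: the actions, welfare numbers, and the `violates_duty` flag are all invented for illustration. Two theories, a simple consequentialism and a duty-based view, are applied to the same brute-force sequential planning problem.

```python
# Toy sketch: two moral theories "compiled" into sequential decision
# procedures. All names, numbers, and the violates_duty flag are
# invented for illustration; they are not drawn from our publications.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Action:
    name: str
    welfare: float        # welfare the action contributes to the outcome
    violates_duty: bool   # whether the action breaches a deontic constraint

ACTIONS = [
    Action("divert", welfare=4.0, violates_duty=True),
    Action("wait",   welfare=0.5, violates_duty=False),
    Action("warn",   welfare=2.0, violates_duty=False),
]

def plans(horizon):
    """Enumerate every action sequence of the given length (brute-force planning)."""
    return list(product(ACTIONS, repeat=horizon))

def consequentialist(horizon=2):
    """Maximise total welfare, ignoring deontic constraints."""
    return max(plans(horizon), key=lambda p: sum(a.welfare for a in p))

def deontological(horizon=2):
    """Prune any plan containing a duty-violating step, then maximise welfare."""
    permitted = [p for p in plans(horizon)
                 if not any(a.violates_duty for a in p)]
    return max(permitted, key=lambda p: sum(a.welfare for a in p))

if __name__ == "__main__":
    print([a.name for a in consequentialist()])  # ['divert', 'divert']
    print([a.name for a in deontological()])     # ['warn', 'warn']
```

Even in this toy case the two theories license different plans, and the deontological theory requires checking a side constraint at every step of every candidate plan; the computational-complexity questions discussed above concern how such representation and checking costs scale in realistic settings.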