ASNets is a neural network architecture that learns to solve large planning and sequential decision-making problems in a domain from example plans or policies for small problems in that domain.
In this paper, published in Artificial Intelligence, Alban Grastien and his co-author address the problem of conformant planning: finding a sequence of actions in a well-specified environment that achieves a given goal despite uncertainty about the initial configuration, and without using observations.
In March 2020, Seth Lazar presented a paper on machine ethics at an interdisciplinary conference at CMU. His respondent was Professor Jonathan Cohen (Princeton).
What can be wrong with a sequence of actions if each individual act or omission is itself permissible? We draw on analogies from the rational choice literature to answer this puzzle, appealing to the existence of global moral norms that apply to sequences of acts.
In a counterexample-based approach to conformant planning, choosing the right counterexample can improve performance. We formalise this observation by introducing a notion of "superiority" of one counterexample over another, which holds whenever the superior counterexample exhibits more tags than the inferior one. We provide a theoretical explanation that supports the strategy of searching for maximally superior counterexamples, and we show how this strategy can be implemented. Empirical experiments validate our approach.
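For illustration only, here is a minimal sketch of the selection strategy described above, under the assumption (not taken from the paper) that each counterexample is characterised by the set of tags it exhibits, and that one counterexample is superior to another when its tag set strictly contains the other's. The names `Counterexample`, `is_superior` and `maximally_superior` are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Counterexample:
    """A candidate counterexample, characterised by the tags it exhibits (assumed model)."""
    name: str
    tags: frozenset = field(default_factory=frozenset)

def is_superior(a: Counterexample, b: Counterexample) -> bool:
    """Hypothetical reading of 'superiority': a's tag set strictly contains b's."""
    return a.tags > b.tags  # strict superset

def maximally_superior(candidates: list[Counterexample]) -> list[Counterexample]:
    """Keep only the counterexamples to which no other candidate is superior."""
    return [c for c in candidates
            if not any(is_superior(other, c) for other in candidates)]

# Example: c2 is superior to c1, so only c2 and c3 remain as maximal candidates.
c1 = Counterexample("c1", frozenset({"t1"}))
c2 = Counterexample("c2", frozenset({"t1", "t2"}))
c3 = Counterexample("c3", frozenset({"t3"}))
print([c.name for c in maximally_superior([c1, c2, c3])])  # ['c2', 'c3']
```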
Alban Grastien and Sylvie Thiébaux attended the AI, Ethics and Society conference, held on 7-8 February in New York City as a side event of the AAAI conference.
To develop morally sensitive artificial intelligence, we have to figure out how to incorporate nonconsequentialist reasoning into mathematical decision theory. This paper, part of a broader project on duty under doubt, explores one specific challenge for this task.
Seth Lazar, with Alan Hájek and lead editor Renee Bolinger, co-edited a special issue of the leading philosophy of science journal Synthese on 'Norms for Risk'.
Seth Lazar presented a public talk on "AI Ethics Without Principles" to audiences from the US government's NITRD program, the Australian Embassy, and the Human-Centered AI Institute at Stanford.
We argue for the existence of rationally supererogatory actions: actions that go above and beyond the call of rational duty. They exist because of normative conflicts: cases where what is best according to one normative domain differs from what is best according to another.
The third instalment of the international conference series Decision Theory and the Future of AI brought together renowned experts in decision theory and AI to discuss the concerns raised by algorithmic decision-making.
The Morality and Machine Intelligence conference brought together academic leaders from institutes across the US, UK and Australia, spanning philosophy, social science and computer science, to openly discuss their latest research on the ethics of machine intelligence.