In a counterexample-based approach to conformant planning, choosing the right counterexample can improve performance. We formalise this observation by introducing the notion of "superiority" of one counterexample over another, which holds whenever the superior counterexample exhibits more tags than the inferior one. We provide a theoretical explanation that supports the strategy of searching for maximally superior counterexamples, and we show how this strategy can be implemented. Empirical experiments validate our approach.
Through in-depth interviews with AI practitioners in Australia, this paper examines perceptions of accountability and responsibility among those who make autonomous systems. We find that AI practitioners envision themselves as mediating technicians, enacting others' high-level plans and then relinquishing control of the products they produce. Findings highlight "ethics" in AI as a challenge distributed across complex webs of human and mechanized subjects.
Epidemic models and self-exciting processes are two types of models used to describe diffusion phenomena online and offline. These models were originally developed in different scientific communities, and their commonalities are under-explored. This work establishes, for the first time, a general connection between the two model classes via three new mathematical components.
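As background for the self-exciting side of that connection, the sketch below simulates a univariate Hawkes process with an exponential excitation kernel via Ogata's thinning algorithm. The kernel form and all parameter values (`mu`, `alpha`, `beta`) are generic illustrative choices, not the three components introduced in the paper.

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    """Simulate a Hawkes process by Ogata's thinning.

    Intensity: lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i)).
    mu is the background rate, alpha the excitation jump, beta the decay;
    alpha/beta < 1 keeps the process subcritical (stable).
    """
    rng = random.Random(seed)
    events = []
    t = 0.0
    while t < horizon:
        # The intensity just after t upper-bounds the intensity until the
        # next event, because the exponential kernel only decays in between.
        lam_bar = mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)  # candidate event from the bounding rate
        if t >= horizon:
            break
        lam_t = mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events)
        if rng.random() <= lam_t / lam_bar:
            events.append(t)  # accept: an event occurs at time t
    return events

events = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, horizon=50.0)
```

Each accepted event raises the intensity by `alpha`, making further events temporarily more likely, which is the self-excitation that the paper relates to epidemic-style contagion.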
The phenomenon of virtual child pornography requires us to radically reconceive our understanding of three core concepts: (i) what it means to be an image; (ii) what it means to be an image of a child; and (iii) what it means to be a sexual image of a child.
I offer a multi-faceted conceptual framework for the explanation and interpretation of algorithmic decisions, and I claim that this framework can lay the groundwork for a focused discussion among multiple stakeholders about the social implications of algorithmic decision-making, as well as AI governance and ethics more generally.
To develop morally sensitive artificial intelligence we have to figure out how to incorporate nonconsequentialist reasoning into mathematical decision theory. This paper, part of a broader project on duty under doubt, explores one specific challenge for this task.
Seth Lazar, with Alan Hájek and lead editor Renee Bolinger, co-edited a special issue of leading philosophy of science journal Synthese on 'Norms for Risk'.
Playtest demonstrates that when our fantasies feel real, and have the power to hurt, they are no longer just a game. Virtual reality can build a bridge between what seems real and what is real, and this means its power to scare us silly is not just novel: it's revolutionary.
Little is known about how human attention is allocated over the large-scale networks underlying most video hosting sites, or about the impact of their recommender systems. In this paper, we propose a model that accounts for network effects when predicting video popularity, and we show that it consistently outperforms the baselines.
We argue for the existence of rationally supererogatory actions: actions that go above and beyond the call of rational duty. They exist because of normative conflicts: cases where what is best according to some normative domain is different to what is best according to some other normative domain.
This lead article analyses the legal risks associated with the use of artificial intelligence in the public sector, exploring the epistemic and moral assumptions of central doctrines of public law and evaluating whether they clash with algorithmic design techniques. The article exposes the central legal challenges of automating public power.
Understanding causation is one of the crucial frontiers of discovery in artificial intelligence, where we increasingly depend on machine learning models that inadequately represent causal relations. Philosophical work analysing the nature of causation lays crucial foundations both for advancing AI itself, and for the many deployments of causal reasoning necessary to develop democratically legitimate AI.