This article, published in the US magazine Barron's, explores how to think about the privacy risks of app-based contact-tracing in the age of big data, arguing that even if tech companies choose wisely and justly, the 'laws' of their operating systems cannot be legitimate. Democratic institutions are the only means we've discovered to legitimate the use of power in complex social systems.
In this talk, I introduce a philosophically-informed framework for the varieties of explanations used in building transparent AI decisions. The paper has been presented at the Halıcıoğlu Data Science Institute and the Department of Philosophy (University of California San Diego), the Departments of Philosophy (Stanford University and University of Washington), and the Department of Logic and Philosophy of Science (University of California, Irvine).
In this article, co-authored with epidemiologist Meru Sheel, Seth Lazar questions whether tech companies or democratically elected governments should decide how to weigh privacy against public health when fundamental rights are not at stake.
The US Defense Innovation Board recently approved a document proposing principles governing the deployment of AI within the Department of Defense. HMI project leader Seth Lazar was invited to join an expert panel discussing the candidate principles and made a submission to the Board.
Together with the Australian Academy of Science, HMI team members wrote a submission responding to the Data61 discussion paper “Artificial Intelligence: Australia’s Ethics Framework”. Read our key recommendations here.
This chapter explores the alignment of the EU data protection and consumer protection policy agendas through a discussion of the reference to the Unfair Contract Terms Directive in Recital 42 of the General Data Protection Regulation.
In the aggregate, advances in data analytics can now yield unexpected and highly beneficial insights into human behaviour, which the government can harness in the interests of the public. But those advances pose significant risks of harming the very people they are intended to benefit. Read more in our submission to the National Data Sharing Commission’s discussion paper on Data Sharing and Release.
In a joint submission, HMI identified seven areas for further development in the Human Rights and Technology discussion paper released by the Australian Human Rights Commission. The three main areas concerned defining ‘AI-informed decision-making’, the demand for explanations, and the absence of a formal link between design and assessment.
Professor Toni Erskine, HMI Discovery Lead, presented at the workshop 'Military Applications of AI, International Security, and Arms Control', hosted by the United Nations Institute for Disarmament Research (UNIDIR) and convened by David Danks (Carnegie Mellon University), Paul Meyer (Simon Fraser University), and Giacomo Paoli (UNIDIR). The workshop was held on 30 and 31 January 2020 in Santa Monica, California.
This impact-driven project, funded by the Minderoo Foundation, will produce concrete solutions (including model legislation) to regulate the use of AI by government agencies and public officials. It aims to make Australia an ‘action-leader’ in the race to ensure that AI is democratically and constitutionally legitimate.
The phenomenon of virtual child pornography requires us to radically reconceive our understanding of three core concepts: (i) what it means to be an image; (ii) what it means to be an image of a child; and (iii) what it means to be a sexual image of a child.
In her talk ‘AI and Power: From Bias to Justice’, Kate Crawford called for us to move beyond an obsession with bias and instead find paths towards justice, enforcing limits on the centralised powers that dominate most of the technology sector.