This article, published in the US magazine Barron's, explores how to think about the privacy risks of app-based contact tracing in the age of big data, arguing that even if tech companies choose wisely and justly, the 'laws' of their operating systems cannot be legitimate. Democratic institutions are the only means we have discovered to legitimate the use of power in complex social systems.
The Australian Robotics Network is leading a series of workshops across the country to progress the second edition of the Robotics Roadmap for Australia, covering areas of national significance: resources, manufacturing, healthcare, services, defence, infrastructure, agriculture/environment, space, and transport/mobility. This webinar laid the foundations of a trust and safety robotics network of researchers, assurance industries, government, and organisations in Australia.
In a paper published in PLOS ONE, Colin Klein and co-authors shed light on the online world of conspiracy theorists by studying a large set of user comments. Their key finding: people who eventually engage with conspiracy forums differ from those who don’t in both where and what they post. The patterns of difference suggest they actively seek out sympathetic communities, rather than passively stumbling into problematic beliefs.
In this talk, I introduce a philosophically informed framework for the varieties of explanation used to build transparent AI decisions. The paper has been presented at the Halıcıoğlu Data Science Institute and the Department of Philosophy (University of California San Diego); the Departments of Philosophy at Stanford University and the University of Washington; and the Department of Logic and Philosophy of Science (University of California, Irvine).
Seth Lazar and Colin Klein question the value of basing design decisions for autonomous vehicles on massive online gamified surveys. Sometimes the size of big data can't make up for what it omits.
In this article, co-authored with epidemiologist Meru Sheel, Seth Lazar questions whether tech companies or democratically-elected governments should decide how to weigh privacy against public health, when fundamental rights are not at stake.
Claire Benn and Seth Lazar recorded an interview with Rashna Farrukh for the Philosopher’s Zone podcast on Radio National. The theme: moral skill and artificial intelligence. Does the automation of moral labour threaten to diminish our capacity for moral judgment, much as automation in other areas has negatively impacted human skill?
We propose a constraint on machine behaviour: partially observed machine systems ought to reassure observers that they understand the constraints they are under, and that they have abided and will abide by those constraints. Specifically, a system should not follow a course of action that, from the observer's point of view, is not easily distinguishable from a course of action that is forbidden.
Most of the art on this website is drawn from digital artists on the Adobe Behance platform (all CC 4.0 Licence). The main background images are all from the incredibly talented Dario Veruari. We chose the images because they reflect our key themes—not just in what they represent, but in how they are made.
Launched on ANZAC Day weekend, the COVIDSafe app is one of the fastest-downloaded apps in Australian history, but has also been the source of much controversy. A month in, we now know more about the app itself, the legislation supporting it, and the alternatives available around the world. It's possible to make an informed assessment of where we are, and where we need to go from here.
The US Defense Innovation Board recently approved a document proposing principles governing the deployment of AI within the Department of Defense. HMI project leader Seth Lazar was invited to an expert panel discussing candidate principles, and made a submission to the Board.
Together with the Australian Academy of Science, HMI team members wrote a submission responding to the Data61 discussion paper: “Artificial Intelligence: Australia’s Ethics Framework”. Read our key recommendations here.