Seth Lazar was invited to be on the Academic Board's Data Governance Working Group, with the remit to consider the university's principles and policies around data protection, in particular in relation to the data generated by members of the university as they use its services (digital and otherwise).
This book offers a conceptual update of affordance theory that introduces the mechanisms and conditions framework, providing a vocabulary and critical perspective for the analysis and design of sociotechnical systems.
This paper is a collaboration between HMI, IAG and Gradient. It reflects our broader concern that new machine learning methods for predicting risk and setting insurance premiums may be unable to distinguish between risks whose costs people should bear themselves and risks whose costs should be redistributed across the broader population, and may also rely on data points that it is intrinsically wrong to use for this purpose.
Through an exploration of content moderation on the social media site Reddit, the authors argue for systematic standards of information governance and the legal treatment of social media companies as media producers.
Humanising Machine Intelligence convened a virtual roundtable consultation with Human Rights Commissioner Edward Santow on 28 May 2020 to discuss the Human Rights and Technology Project. HMI brought together a group of senior experts and decision makers from academia, industry and government to support the important work of the Commission.
I discussed how seemingly value-neutral decisions made by technology workers can have major moral implications, and how to think critically and proactively about them.
Proposed legislation will open the way to sharing the vast quantities of data held by the Australian government, without needing our consent. While it promises to enable the smooth service delivery citizen-consumers have come to expect, it also challenges traditional notions of privacy, consent and trust in the public sphere.
In this submission, Dr Will Bateman (with Dr Julia Powles) responded to the Australian Human Rights Commission’s Technology and Human Rights Discussion Paper. The submission focused on three areas of reform: the use of self-regulation and cost-benefit analyses in the regulation of human rights; the remedial force of human rights law; and the powers given to any ‘AI Safety Commissioner’.
Seth Lazar and Colin Klein question the value of basing design decisions for autonomous vehicles on massive online gamified surveys. Sometimes the size of big data can't make up for what it omits.
We propose a constraint on machine behaviour: a partially observed machine system ought to reassure observers that it understands the constraints it is under, and that it has abided, and will abide, by those constraints. Specifically, a system should not follow a course of action that, from the observer's point of view, is not easily distinguishable from a forbidden one.
The US Defense Innovation Board recently approved a document proposing principles governing the deployment of AI within the Department of Defense. HMI project leader Seth Lazar was invited to an expert panel discussing candidate principles, and made a submission to the Board.
Together with the Australian Academy of Science, HMI team members wrote a submission responding to the Data61 discussion paper: “Artificial Intelligence: Australia’s Ethics Framework”. Read our key recommendations here.