Posts tagged Design
Should I Use That Rating Factor? A Philosophical Approach to an Old Problem

This paper is a collaboration between HMI, IAG and Gradient. It reflects our broader concern that new machine learning methods for predicting risk and setting insurance premiums may be unable to distinguish between risks whose costs people should bear themselves and risks whose costs should be redistributed across the broader population, and that such methods may also rely on data points that it is intrinsically wrong to use for this purpose.

Read More
Human Rights Commission Roundtable Discussion

Humanising Machine Intelligence convened a virtual roundtable consultation with Human Rights Commissioner Edward Santow to discuss the Human Rights and Technology Project on 28 May 2020. HMI brought together a group of senior experts and decision makers from across academia, industry and government to support the important work of the Commission.

Read More
Response to the Australian Human Rights Commission Discussion Paper: Technology and Human Rights

In this submission, Dr Will Bateman (with Dr Julia Powles) responded to the Australian Human Rights Commission’s Technology and Human Rights Discussion Paper. The submission focused on three areas of reform: the use of self-regulation and cost-benefit analyses in the regulation of human rights; the remedial force of human rights law; and the powers given to any ‘AI Safety Commissioner’.

Read More
Virtue Signalling: Reassuring Observers of Machine Behaviour

We propose a constraint on machine behaviour: partially observed machine systems ought to reassure observers that they understand the constraints they are under, and that they have abided, and will continue to abide, by those constraints. Specifically, a system should not follow a course of action that, from the point of view of the observer, is not easily distinguishable from a forbidden course of action.

Read More