Submission to AHRC on Human Rights and Technology

Human Rights and Technology: Submission to the Australian Human Rights Commission on the Human Rights and Technology Discussion Paper

Claire Benn, Jenny Davis, Seth Lazar, Toni Erskine, Chelle Adamson, with Australian Academy of Science, 3Ai, Michael Barber, Robert Williamson

Policy Submission

This submission is a response to the consultation by the Australian Human Rights Commission on its published discussion paper, “Human Rights and Technology”. It is made with advice and expertise from the AAS Fellowship; the AAS National Committee for Data in Science; the AAS National Committee for Information Communication Science; the Australian National University’s (ANU) Humanising Machine Intelligence (HMI) and 3Ai institutes; and Professor Michael Barber AO FAA FTSE, Co-Chair of the Academy’s ARC LASP report on Big Data.

The discussion paper put forward by the Australian Human Rights Commission (AHRC) outlines the AHRC’s preliminary views and proposals in three key areas: (1) regulation, leadership and good governance amid the rise of new technologies; (2) the use of artificial intelligence (AI) in decision making; and (3) accessibility of new technologies for people with disability. The discussion paper is a thorough, sensitive and value-aware proposal. It takes the issues posed by new technology seriously, makes nuanced distinctions that respect the complex nature of the phenomenon under discussion, and does not shy away from suggestions and demands to protect the interests of the citizens of Australia. We particularly encourage the discussion paper’s current direction of making effective use of existing regulation.

HMI identified seven areas for further development. First, defining ‘AI-informed decision-making’, which is key to the proposed legislative reform: this includes clarifying what counts as ‘legally or similarly significant effects’ and when AI ‘materially assists in the process of making a decision’, and exploring where these two aspects of the definition come apart. Second, examining the strengths and weaknesses of the proposals concerning the explainability of AI systems, including whether, when, and what kind of explanation might be demanded. Third, explaining the advantages of formally linking design and assessment, such as mapping the production lifecycle in a single, continuous way so that all stages adhere to human rights standards.

We also encouraged the Commission to clarify the duties and duty-holders targeted by the discussion paper; to reconsider setting up a new regulatory body and instead integrate a human rights approach to AI within existing regulatory bodies; to target practices (such as surveillance) rather than techniques (such as facial recognition); and finally to prioritise building trustworthy systems rather than simply aiming for increased public trust.

Read the submission here.