Legal Audit of AI in the Public Sector 

HMI Technology Policy Paper: 1/2021, Will Bateman (Australian National University, Law School) & Julia Powles (University of Western Australia, Law School), 7 May 2021.

Executive Summary

1. The collection of technologies captured under the umbrella phrase ‘artificial intelligence’ (AI) triggers a rethink of the law applying to governments, because those technologies fundamentally change the power balance between public officials and citizens.

2. Automation, machine learning, data archiving/networking and mass surveillance technologies give the entities that control them enormous advantages over the people subject to them.

3. Existing legal (and constitutional) frameworks applying to government are built on a ‘human-centric assumption’: that the people who exercise public power have the same cognitive, physical and social capacities as the citizens they govern. That assumption no longer holds when governments apply AI technologies that are more powerful, yet potentially more opaque and narrow-minded, than human decision-makers.

4. Generally speaking, legal rules that currently apply to government use of AI: a. lag behind technical advancements in AI; b. fail to explicitly regulate the potential harms of AI; and c. use ‘soft’ rather than ‘hard’ law.

5. More detailed conclusions can be reached about the law governing public sector AI by analysing case studies that show how existing legal rules operate in concrete contexts.

6. To undertake that task, we use basic requirements of liberal democratic government as criteria to measure the appropriateness of existing legal frameworks applying to government use of AI (Audit Criteria):

a. Knowledge of the essential features of how AI technologies use information and reach outcomes in a particular context;

b. Assent to the use of AI through specific authorising legislation;

c. Personhood, or treating people as autonomous individuals, as the basic standard for legitimate government behaviour;

d. Protection of basic civil rights;

e. Contestability before an independent judicial body; and

f. Remedial action for wrongful use of AI. 

7. We use those criteria to audit case studies that show how the law works ‘in application’ rather than ‘in theory’:

a. Automation of welfare state functions in Australia via the Online Compliance Intervention (OCI or robodebt) system;

b. Data-driven machine learning technology as part of the criminal justice system in the US via the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) system;

c. Data archiving/networking in the UK National Health Service, which led to the sharing of personal health information by public health authorities with Google (NHS/DeepMind); and

d. Mass surveillance in UK policing via the use of live facial recognition technology (NeoFace Watch).

8. In each case study we assign a score to the legal frameworks governing AI by reference to the Audit Criteria, and reflect on how that score could change under different legal regimes applying to AI.

9. We conclude the Legal Audit with an overall assessment of how successfully existing law governs the use of AI by government.

The authors acknowledge the generous support of the Minderoo Foundation. 

The full paper can be accessed here.