AI: Law, Ethics, Algorithms, and Politics (AI LEAP) 2021


RSSS Building, ANU, Canberra, 1-3 December 2021

AI LEAP Website

Overview

AI LEAP is a new annual conference that aims to foster intellectual exchanges among disciplinary and trans-disciplinary experts, drawing broadly on computer science, the social sciences, and the humanities, without centering any one perspective at the expense of others.

As ever greater proportions of our online and offline lives are shaped by AI and related technologies, cross-disciplinary collaboration to understand the risks and opportunities that these systems create has become ever more important. Tools that use data and/or AI can generate relations of unaccountable power, can poison public discourse, and can perpetuate and exacerbate social inequalities. They can also enable extraordinary leaps forward in medical research, minimise market inefficiencies, and both delight and surprise. Our challenge is to deepen our understanding of data and AI, so that we can design sociotechnical systems that uphold our values as political communities. To do this, we cannot rely on any one disciplinary approach; we must bring together scholars from fields including computer science, digital media studies, law, philosophy, political science, and sociology, among many others, to understand how data and AI now shape our world, and to design sociotechnical systems that shape it for the better.

During its inaugural year, AI LEAP will bring together the burgeoning Australasian community in this field (indeed, anyone in our travel bubble in December), and will highlight exciting new research being undertaken by Australasian scholars. It will be an in-person conference (touch wood).

Topics

We welcome contributions on the following (and related) topic areas:

  • Empirical research into the impacts of AI systems.

  • Evaluative research into AI impacts.

  • Evaluative research into the goals at which we should aim when redesigning AI systems.

  • Technical research into the representation, acquisition, and use of ethical knowledge by AI systems.

  • Proposal and/or evaluation of technical methods for realising evaluative goals.

  • Proposal and/or evaluation of sociotechnical methods for realising evaluative goals.

  • Proposal and/or evaluation of legal and regulatory approaches for realising evaluative goals.

Important Dates

  • Submission Deadline: 11:59 pm AEST, October 4, 2021

  • Reviews Due: 11:59 pm AEST, October 29, 2021

  • Notification: 11:59 pm AEST, November 12, 2021

  • Final Version: 11:59 pm AEST, November 23, 2021

  • Conference: December 1-3, 2021

Call for Papers

We are seeking papers on any topic related to AI: Law, Ethics, Algorithms, and Politics. We are methodologically and substantively inclusive. The following topic list is intended as a prompt, not as an exhaustive enumeration; further details are available on the conference website.

Empirical research into the impacts of AI systems.

  • Bringing to light applications of AI with significant but insufficiently recognised impacts.

E.g. detailing new and underexplored uses of AI in government, defence, healthcare, finance, political campaigning, marketing, digital platforms and other areas.

  • Advancing our theoretical understanding of how AI systems are changing societies.

E.g. exploring how data and AI-driven policy-making leads to changes in how governments see citizens (and vice versa); how industry shapes social environments so that they are more susceptible to datafication; how AI systems can react to, produce and reproduce social inequality and prejudice, including racism and misogyny; the social consequences of automation; political economy of big tech.

  • Investigating public or professional resistance to the deployment of AI systems.

Evaluative research into AI impacts.

  • Deepening the moral diagnosis of existing and feasible AI systems.

E.g. theoretical accounts of why surveillance may be resisted or embraced; how it reshapes subjectivity and behaviour; the kinds of manipulation it enables; accounts of the nature of discrimination as practised by AI systems; existential risks posed by the development of AI systems.

  • Evaluating existing and feasible AI systems against existing legal and regulatory regimes.

E.g. assessing the feasibility of ‘black box’ AI systems complying with existing administrative law; data protection implications of existing AI systems; impact of AI systems on antitrust issues.

Evaluative research into the goals at which we should aim when redesigning AI systems.

  • Theoretical work aimed at addressing, understanding, or resolving evaluative uncertainty and disagreement about goals to aim at with AI systems.

E.g. determining how to think about discrimination in the age of AI; how to philosophically conceptualise alignment with human values.

  • Normative theory aiming to map out how AI systems could be used legitimately, and for social benefit.

E.g. re-examining the moral foundations of administrative law to devise standards for AI-assisted institutional decision-making.

Technical research into the representation, acquisition, and use of ethical knowledge by AI systems.

  • How can ethical knowledge be represented: as rules and constraints; as utility functions; as stories and scripts; as deep neural networks; etc.?

E.g. ethical knowledge is learned by humans from limited amounts of experience and pedagogy; what does this mean for representation?

  • How should key concepts such as fairness and bias be formalised to allow properties of intelligent systems to be evaluated and guaranteed?

E.g. establishing “best practices” for training set curation to prevent or reduce transmission of existing societal bias to a learning system.
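As a minimal illustration of what such a formalisation can look like (purely a sketch, with illustrative names and data, not tied to any particular system or submission), demographic parity difference measures the gap in positive-prediction rates between two groups:

```python
def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rate between two groups.

    Assumes binary predictions (0/1) and exactly two group labels.
    """
    rates = {}
    for g in set(groups):
        # Collect the predictions for members of group g and take the mean.
        members = [p for p, gr in zip(predictions, groups) if gr == g]
        rates[g] = sum(members) / len(members)
    values = list(rates.values())
    return abs(values[0] - values[1])

# Hypothetical binary predictions for individuals in groups "a" and "b":
# group "a" receives positive predictions at rate 3/4, group "b" at 1/4.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

A value of 0 means both groups receive positive predictions at the same rate; how large a gap is acceptable, and whether this is the right criterion at all, is precisely the kind of evaluative question the conference invites.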

Proposal and/or evaluation of technical methods for realising evaluative goals.

  • Developing AI systems for specific application domains that advance valid evaluative goals.

E.g. ‘Mechanism Design for Social Good’ and related areas.

  • Introducing mechanisms for procedural justice into AI systems as deployed in practice.

E.g. methods for making AI systems in practice better suited to democratic governance; design tools for introducing auditability trails into AI systems; explainable AI with a social purpose.

Proposal and/or evaluation of sociotechnical methods for realising evaluative goals.

  • Exploring the culture and practices of AI research and development to counteract structural injustice.

E.g. labour rights and employee activism in the tech sector; alternative socially oriented methods for AI research and development, such as data trusts and public benefit corporations; the nature of collective mobilisation in digitally distributed environments.

  • Proposing and evaluating methods for responsible and inclusive innovation with active involvement from those affected by new technologies.

E.g. methods for participatory design and responsible innovation practices.

Proposal and/or evaluation of legal and regulatory approaches for realising evaluative goals.

  • Exploring the relative merits of using legal instruments such as antitrust, consumer protection, and data protection to regulate the impacts of AI.

E.g. comparative analysis of data protection regimes; arguments for or against explicit regulation of automated decision-making; ongoing prospects for transnational regulation.

  • Exploring the role of public law in constraining public use of AI and related technologies.

E.g. investigation of how administrative law needs to be revised to accommodate AI (or vice versa).

AI LEAP Best Paper Prize

To bring the best work in the region to the fore, we are inaugurating the AI LEAP Best Paper Prize, to be awarded to the paper that, in the judgement of the prize committee, most substantially advances our collective understanding of AI LEAP themes.
The prize committee will judge against three criteria: excellence, significance, and the ability to communicate the significance of the project to an interdisciplinary audience.
The Best Paper will be spotlighted in its own hour-long session during the conference, and will receive a prize of AUD 2,500.
At least two runner-up papers will also be jointly spotlighted during the conference, and will receive a prize of AUD 1,000.
The shortlist of Highly Commended papers will also be announced.
Eligibility: The prize targets early-career scholars within 10 years of the award of their PhD, taking career interruptions into account. PhD, master's, and undergraduate students are also eligible. All disciplinary approaches are eligible.

Submission Instructions

Submitted papers should address these or related topics in ways that make a substantive contribution to knowledge in one or more fields. A paper should clearly establish its research contribution, its relevance, and its relation to prior research.

Submitted papers should be either at most 12 pages in arXiv preprint format, or at most 10,000 words including footnotes (in any format).

Optionally, authors can upload supplementary materials (e.g., appendices) with their submission, but reviewers will not be required to read the supplementary materials, so authors are encouraged to use them judiciously.

At least one author of each accepted paper is required to register for, be present in person at, and present the work at the conference. Touch wood.

All submissions must be submitted through the EasyChair link on the conference website: https://easychair.org/conferences/?conf=aileap2021

Review will be double-blind, so authors should remove identifying information from their papers. However, to assist in selecting reviewers, authors should report the paper’s primary disciplines on the first page.

IMPORTANT NOTICE: AI LEAP is not at present an archival conference. We may offer selected authors the opportunity to publish in a special issue of a leading journal, but submission does not imply any publication obligation, meaning that any submission can be subsequently submitted to another venue.

Papers submitted to AI LEAP must not have been published, or accepted for publication, at an archival conference or journal prior to submission to AI LEAP.

Recognising that a multiplicity of perspectives leads to stronger science, the conference organisers actively welcome and encourage people with differing identities, expertise, backgrounds, beliefs, or experiences to participate.