Jenny Davis, Apryl Williams, and Michael Yang displace "fair" machine learning with an intersectional reparative approach in this article published in Big Data & Society.
Claire Benn and Seth Lazar ask what is wrong with online behavioural advertising and recommender systems in this paper, published in the Canadian Journal of Philosophy.
HMI Chief Investigator Toni Erskine is academic lead for a major collaboration on 'AI for Social Good', uniting the United Nations Economic and Social Commission for Asia and the Pacific (UN ESCAP), Google, and the Association of Pacific Rim Universities. Toni, in collaboration with HMI staff, will help build out the multi-stakeholder network and the policy insight briefs developed from the AI for Social Good Project and Summit.
Claire Benn is working with Kalervo Gulson from the University of Sydney and the Gradient Institute on the co-design project 'UK Exam Algorithm Controversy – Co-designing an Interactive Interface'.
Call for Papers now open! ACM FAccT solicits work from a wide variety of disciplines, including computer science, statistics, law, the social sciences, the humanities, and policy, as well as multidisciplinary scholarship on fairness, accountability, and transparency in computational systems (broadly construed). We welcome contributions that consider dimensions beyond individual decisions, including equity and justice in systems, policy, and human rights.
The World Economic Forum's Quantum Computing Governance Principles programme brings together a global multi-stakeholder community of experts from the public sector, private sector, academia, and civil society. The programme aims to formulate principles and a broader ethical framework for the responsible, purpose-driven design and adoption of quantum computing technologies, driving positive outcomes for society.
In this paper, we study two unbiased estimators of the Fisher information matrix (FIM) in the context of deep learning. We derive closed-form expressions for their variances, bound those variances, analyze the impact of deep neural network structure, and discuss the implications of our results for deep learning.
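For orientation, the two estimators can be situated against the standard definition of the FIM. The framing and notation below are textbook material that we assume here, not quoted from the paper: for a conditional model p(y | x; θ) with inputs x ~ p(x) and labels sampled from the model itself, the score-based and Hessian-based forms of the FIM coincide, so each yields a single-sample unbiased Monte Carlo estimator.

```latex
% Standard FIM identities (assumed setting and notation; not quoted from the paper).
% Expectations are over x \sim p(x) and model-sampled labels y \sim p(y \mid x; \theta).
\[
  F(\theta)
  = \mathbb{E}\!\left[ \nabla_\theta \log p(y \mid x;\theta)\,
                       \nabla_\theta \log p(y \mid x;\theta)^{\top} \right]
  = -\,\mathbb{E}\!\left[ \nabla_\theta^{2} \log p(y \mid x;\theta) \right].
\]
% Each form gives a single-sample unbiased Monte Carlo estimator:
\[
  \hat{F}_{1}(\theta) = \nabla_\theta \log p(y \mid x;\theta)\,
                        \nabla_\theta \log p(y \mid x;\theta)^{\top},
  \qquad
  \hat{F}_{2}(\theta) = -\,\nabla_\theta^{2} \log p(y \mid x;\theta).
\]
```

Both estimators have expectation F(θ), but their variances generally differ, which is what makes closed-form variance expressions and bounds informative for choosing between them.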
This paper provides the first formalisation and empirical demonstration of a particular safety concern in reinforcement learning (RL)-based news and social media recommendation algorithms: what we call "user tampering", a phenomenon whereby an RL-based recommender system may manipulate a media user's opinions, preferences, and beliefs via its recommendations, as part of a policy to increase long-term user engagement.
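To make the incentive concrete, here is a minimal toy simulation. It is our own illustrative construction using tabular Q-learning, not the paper's formal model; the dynamics, reward shape, and all names such as `step`, `to_state`, and `ALPHA_DRIFT` are hypothetical. In this toy world, polarised users engage more, so a recommender rewarded only for engagement learns to shift a neutral user's opinion toward an extreme.

```python
# Toy illustration of "user tampering" (our own hypothetical construction,
# not the paper's formal model): an RL recommender whose only objective is
# long-term engagement learns to polarise a simulated user, because
# polarised users in this toy world engage more.
import random

random.seed(0)

ACTIONS = [-1, 0, 1]          # stance of recommended content
ALPHA_DRIFT = 0.1             # how strongly recommendations move opinion
N_STATES = 21                 # opinion discretised to 21 bins over [-1, 1]

def to_state(opinion):
    """Discretise a continuous opinion in [-1, 1] to a state index."""
    return int(round((opinion + 1) / 2 * (N_STATES - 1)))

def step(opinion, action):
    """User dynamics: opinion drifts toward the recommended stance, and
    engagement (reward) is higher for congruent content and extreme users."""
    opinion = max(-1.0, min(1.0, opinion + ALPHA_DRIFT * (action - opinion)))
    congruence = 1.0 - abs(action - opinion) / 2.0   # in [0, 1]
    reward = congruence * (1.0 + abs(opinion))       # extremity bonus
    return opinion, reward

# Tabular Q-learning, maximising discounted long-term engagement.
Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]
lr, gamma, eps = 0.1, 0.95, 0.1
for episode in range(5000):
    opinion = 0.0                                    # user starts neutral
    for t in range(50):
        s = to_state(opinion)
        a = (random.randrange(len(ACTIONS)) if random.random() < eps
             else max(range(len(ACTIONS)), key=lambda i: Q[s][i]))
        opinion, r = step(opinion, ACTIONS[a])
        s2 = to_state(opinion)
        Q[s][a] += lr * (r + gamma * max(Q[s2]) - Q[s][a])

# Roll out the greedy policy: the agent polarises an initially neutral user.
opinion = 0.0
for t in range(50):
    s = to_state(opinion)
    a = max(range(len(ACTIONS)), key=lambda i: Q[s][i])
    opinion, _ = step(opinion, ACTIONS[a])
print(f"final opinion after greedy rollout: {opinion:+.2f}")  # far from 0.0
```

Running the script prints a final opinion near an extreme for a user who started at neutral, even though no term in the objective refers to the user's opinion: the tampering incentive arises purely from optimising long-term engagement.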
Congratulations to Hanna Kurniawati and Sylvie Thiebaux on receiving ARC funding for their project 'Integrated Planning for Uncertainty-Centric Pilot Assistance Systems'!
Associate Professor Hanna Kurniawati has received an ARC grant!
Read MoreProfessor Seth Lazar wins $1m Australian Research Council Future Fellowship Award to study the political philosophy of AI.