The Centre for Artificial Intelligence and Digital Ethics (CAIDE) facilitates cross-disciplinary research, teaching, and leadership on the ethical, regulatory, and legal issues relating to Artificial Intelligence (AI) and digital technologies at the University of Melbourne. Our research explores the impact, deployment, and governance of these emerging technologies across society. Our approach combines legal, ethical, and social perspectives with technological expertise to examine these issues holistically. We study fairness, privacy, accountability, and transparency in these technologies, both to further our understanding and to guide their development and the policy settings needed for effective use across society.
The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S Centre) is a new, cross-disciplinary, national research centre, which aims to create the knowledge and strategies necessary for responsible, ethical, and inclusive automated decision-making. The Centre combines social and technological disciplines in an international industry, research, and civil society network that brings together experts from Australia, Europe, Asia, and America. It will formulate world-leading policy and practice and inform public debate, with the aim of reducing risks and improving outcomes in the priority domains of news and media, transport, social services, and health.
The HMI project partners with the Actuaries Institute to co-author publications.
This project brings together the expertise of the ACT Education Directorate, the Gradient Institute, and the HMI team. The data gathered by the Education Directorate will be used in conjunction with causal analysis to better understand the factors that influence the educational outcomes of students.
As humans, our skills define us, and no skill is more human than the exercise of moral judgment. We are already using AI to automate morally loaded decisions. In other domains of human activity, automating a task diminishes our skill at that task. Will 'moral automation' diminish our moral skill? If so, how can we mitigate that risk and adapt AI to enable moral 'upskilling'? Our project will use philosophy and social psychology to answer these questions.
A not-for-profit research institute created in 2018, Gradient is made up of world-class machine learning researchers. Their vision, like ours, is to progress the research, design, development, and adoption of ethical AI systems.
Established in 2016, the Leverhulme CFI at Cambridge is one of the leading interdisciplinary research centres working on the future of AI.
We have worked closely with them since our first days: their Academic Director Huw Price is on our Advisory Board, and we co-sponsored three workshops in 2019 (on AI, Politics and Security; Decision Theory and AI; and Kinds of Intelligence).