Little is known about how human attention is allocated over the large-scale networks used by most video hosting sites and about the impacts of the recommender systems they use. In this paper, we propose a model that accounts for the network effects for predicting video popularity, and we show it consistently outperforms the baselines.
Seth Lazar presented a public talk, "AI Ethics Without Principles", to audiences from the US government's NITRD Agency, the Australian Embassy, and the Human-Centered AI Institute at Stanford.
In October 2019 Seth Lazar visited MIT and Carnegie Mellon University to present a talk on the value of explanations to philosophers and computer scientists.
We argue for the existence of rationally supererogatory actions: actions that go above and beyond the call of rational duty. They exist because of normative conflicts: cases where what is best according to one normative domain differs from what is best according to another.
This lead article analyses the legal risks associated with the use of artificial intelligence in the public sector, exploring the epistemic and moral assumptions of central doctrines of public law and evaluating whether they clash with algorithmic design techniques. The article exposes the central legal challenges of automating public power.
Another sold-out public lecture from the time before COVID, when we could all meet in the same building. David Danks gave a whistlestop tour through the ground-breaking research he and colleagues at Carnegie Mellon University are doing on making the interface between humans and machines work better for the former.
The third instalment of the international conference series Decision Theory and the Future of AI brought together renowned experts in decision theory and AI to discuss the concerns raised by algorithmic decision making.
The Morality and Machine Intelligence conference brought together academic leaders in philosophy, social science, and computer science from institutes across the US, UK, and Australia to openly discuss their latest research on the ethics of machine intelligence.
This piece outlines the approach social media companies took to the discovery of Chinese-linked coordinated activity on their networks that appeared to target political activity in Hong Kong. It suggests that removing such accounts is complicated by the companies' business models.
In this public lecture to launch the HMI project, Professor Shannon Vallor argued that instead of trying to build humane technology that does moral reasoning for us, we should support humanising technology that enhances our ability to act as moral agents.
Colin Klein and Mark Alfano began work in July 2019 on a $300,000 ARC grant to investigate 'Trust in a social and digital world'. Using the tools of social epistemology, virtue epistemology, and network science, this project will identify how individuals should distribute their trust when embedded in epistemically hostile environments.
Walter Sinnott-Armstrong gave the first HMI public lecture to a packed theatre on June 26, 2019, enthralling the audience with the idea that AI could make us better moral decision-makers. He explained how AI could be used to improve human moral judgments in many areas of ethics, discussing kidney transplants, autonomous vehicles, and autonomous weapons.