
UCL Faculty of Laws


Regulatory Models for Algorithmic Assessment: Robust Delegation or Kicking The Can?

25 April 2024, 6:00 pm–7:30 pm

Image: Alexa Steinbrück / Better Images of AI / Explainable AI / CC-BY 4.0

Margot Kaminski, Michael Veale and Jennifer Cobbe will compare and contrast different AI regulatory regimes

Event Information

Open to

All

Organiser

UCL Laws Events

Location

UCL Faculty of Laws
Bentham House, Endsleigh Gardens
London
WC1H 0EG

About the talk

Recent years have seen a surge in regulation targeting algorithmic systems, including online platforms (Online Safety Act [UK], Digital Services Act [EU]), artificial intelligence (AI Act [EU], AI Executive Order [US]), and the application and extension of existing frameworks, such as data protection, to algorithmic challenges (UK and EU GDPR, California Consumer Privacy Act and Draft Automated Decisionmaking Technology Regulations [US]). Much of the time, these instruments require regulated actors to undertake or outsource some form of assessment, such as a risk assessment, impact assessment or conformity assessment, to ensure that the systems being deployed have desired characteristics.

At first glance, all these assessments look like the same regulatory mode. But are they? What are policymakers and regulators actually doing when they outsource the analysis of such systems to regulated actors or audit ecosystems, and under what conditions might this produce good regulatory results? Is the AI Act's conformity assessment really the same kind of beast as the Digital Services Act's or Online Safety Act's risk assessments, or the GDPR's data protection impact assessment? Or do these instruments simply kick value-laden questions, such as fairness, transparency, representativeness and speech norms, down the road to other actors because legislators do not want to resolve them?

In this discussion, three scholars of these regimes will compare and contrast different approaches to regulating AI, focusing on how the actors within them can understand the systems around them. Does outsourcing the analysis of how AI systems work make sense? Is that task given to actors with the position and analytic capacity to carry it out, or might it instead lead to regulatory arbitrage or even regulatory failure?

About the speakers

Margot Kaminski
Margot Kaminski is a Professor at the University of Colorado Law School and the Director of the Privacy Initiative at Silicon Flatirons. She specializes in the law of new technologies, focusing on information governance, privacy, and freedom of expression. Recent work salient to this panel includes Margot Kaminski, ‘Regulating the Risks of AI’ (2023) 103 Boston University Law Review 1347.

Michael Veale
Michael Veale is an Associate Professor in Digital Rights and Regulation and Vice-Dean (Education Innovation) at UCL Faculty of Laws, and a Fellow at the Institute for Information Law at the University of Amsterdam. He specialises in the tensions between law and policy, emerging technologies, and power, with work spanning platform regulation, artificial intelligence, encrypted computing systems and online tracking. Recent work salient to this panel includes Robert Gorwa and Michael Veale, ‘Moderating Model Marketplaces: Platform Governance Puzzles for AI Intermediaries’ (2024) 16 Law, Innovation and Technology.

Jennifer Cobbe
Jennifer Cobbe is an Assistant Professor in Law and Technology in the Faculty of Law at the University of Cambridge, Deputy Director of the Centre for Intellectual Property and Information Law, and a Fellow of the Lauterpacht Centre for International Law. Her interests lie in critical, interdisciplinary work on questions of power, political economy, and the law around internet platforms and informational capitalism, technological supply chains and infrastructures, and AI and automated decision-making. Recent work salient to this panel includes Jennifer Cobbe and others, ‘Understanding Accountability in Algorithmic Supply Chains’, Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (Association for Computing Machinery 2023).

About the Chair

Andrew Strait (Ada Lovelace Institute)
Andrew Strait is an Associate Director at the Ada Lovelace Institute and is responsible for their work addressing emerging technology and industry practice. Prior to joining Ada, he was an Ethics & Policy Researcher at DeepMind, where he managed internal AI ethics initiatives and oversaw the company’s network of external partnerships.


Booking

This event is free of charge, and both in-person and online tickets are available.


Book your place