UCL Department of Science, Technology, Engineering and Public Policy

Policy brief: AI-enabled future crime

Artificial Intelligence (AI) technologies could be exploited for crime. This briefing ranks 20 potential AI-enabled future crimes. Fake audio and video content emerged as the most serious threat.

AI-enabled future crime policy briefing

Download the AI-enabled future crime policy brief [PDF]

This study identified 20 applications of AI and related technologies which could be used for crime now or in the future.

Future crimes were ranked as low, medium or high concern according to the harm they could cause, the criminal profit (achieving a financial, terror, harm or reputational goal), how achievable the crime would be and how difficult it would be to defeat.

Six crimes were identified as most concerning: audio and video impersonation, driverless vehicles as weapons, tailored phishing, disrupting AI-controlled systems, large-scale blackmail and AI-authored fake news.

Funder & Key Contributors:
This work was carried out by the Dawes Centre for Future Crime at UCL. This briefing was produced in partnership with Florence Greatrix at UCL STEaPP’s Policy Impact Unit. The research was funded by the Dawes Centre for Future Crime at UCL.

Lead researchers:
Professor Lewis Griffin, Dr Matthew Caldwell (Department of Computer Science), Professor Shane Johnson (Department of Security and Crime Science)

Output type:
Policy briefing