
UCL Grand Challenges


Investigating the properties of intent for AI

The project pursued a novel question, measuring how jurors would judge intent in AI.


21 October 2024

Grant


Grant: Grand Challenges Doctoral Students' Small Grants
Year awarded: 2020-21
Amount awarded: £2,100

Project Team


  • Henry Ashton, UCL Engineering Science
  • Matija Franklin, UCL Brain Sciences

Research shows that people's inferences about the intent behind actions, whether performed by humans or AI, are surprisingly consistent. This study seeks to further test these findings and refine an algorithmic definition of intent that is easily understood by the general public. Additionally, it aims to investigate how judgments of AI intent are reflected back onto its programmer or owner, offering new insights into the connection between AI behaviour and human accountability.

The funding for this research allowed the researchers to conduct three related surveys, designed in Qualtrics and administered through Prolific. Participants were asked to imagine they were jurors judging whether a human or AI pilot had broken the law by flying into a restricted zone.

Outputs and Impact


  • A follow-up project has since been run to measure laypeople's judgements of culpability for harm caused by AI versus humans, according to the common law's definitions of Purpose, Knowledge, Recklessness and Negligence. This has been submitted to an experimental psychology conference.
  • AI-caused harm will increasingly come before the courts, and this work is an early example of experimental jurisprudence on the subject.
  • This work is the first of its kind and thus furthers UCL's reputation as an academic leader in the area of AI and experimental psychology.