Police tech could be violating human rights

8 February 2019, 07:34 | Updated: 8 February 2019, 07:55

Picture: Facial recognition technology

Researchers at the University of East Anglia say facial recognition technology (FRT) that is being trialled by some police forces could violate human rights.

The tech analyses a person's facial features and compares them to images already stored, in order to identify known offenders.
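In broad terms, such systems reduce each face image to a numeric feature vector and compare it against stored reference images, flagging a match when the similarity passes a threshold. The sketch below is a minimal illustration of that idea, not any force's actual system: the embed() function is a stand-in for a trained face recognition model, and the watch-list names and threshold value are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(image_id: str) -> np.ndarray:
    """Placeholder for a trained model mapping a face image to a feature vector."""
    # Real systems compute this from pixels; random vectors stand in here.
    return rng.standard_normal(128)

# Hypothetical 'watch list': reference embeddings of images already stored.
watch_list = {"suspect_a": embed("suspect_a.jpg"),
              "suspect_b": embed("suspect_b.jpg")}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(probe: np.ndarray, threshold: float = 0.6):
    """Return the closest watch-list entry above the threshold, else None."""
    scores = {name: cosine_similarity(probe, ref) for name, ref in watch_list.items()}
    name, score = max(scores.items(), key=lambda kv: kv[1])
    return name if score >= threshold else None

# A face captured from a live camera frame is embedded and compared.
print(best_match(embed("camera_frame_001.jpg")))  # random vectors: almost surely None
```

The threshold drives the trade-off the researchers go on to highlight: lower it and false positives rise; raise it and genuine matches are missed.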

But there's currently no law to stop police forces taking images from our social media to create 'watch lists'.

Researchers want FRT to be subjected to more rigorous testing and greater transparency.

The technology was first tested in public gatherings in 2014, when Leicestershire Police trialled a 'Neoface' facial recognition system, later using the technology to identify 'known offenders' at a music festival with 90,000 concertgoers.

The Leicestershire Police and the other two forces trialling FRT - the Metropolitan Police Service and the South Wales Police - argue the technology is lawful and its use in surveillance operations is proportionate. But researchers from UEA and Monash University in Australia say the technology could violate human rights. 

They argue that insufficient statistical information about the trials has been made publicly available for scrutiny. The limited outcomes that have been shared, the researchers say, show high false-positive identification rates and a low number of positive matches with 'known offenders'.
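To see why high false-positive rates matter at the scale of these deployments, consider some base-rate arithmetic. Every figure below is an assumption chosen purely for illustration; none comes from the trials themselves.

```python
# All figures are assumed for illustration; none are from the police trials.
crowd_size = 90_000           # crowd comparable to the festival mentioned above
offenders_present = 50        # assumed number of genuine watch-list matches
false_positive_rate = 0.001   # assumed: 0.1% of innocent faces wrongly flagged
true_positive_rate = 0.80     # assumed: 80% of genuine matches detected

false_alarms = (crowd_size - offenders_present) * false_positive_rate
true_hits = offenders_present * true_positive_rate

print(f"false alarms: {false_alarms:.0f}")  # ~90 innocent people flagged
print(f"true hits:    {true_hits:.0f}")     # ~40 genuine matches
print(f"share of alerts that are wrong: {false_alarms / (false_alarms + true_hits):.0%}")
```

Under these assumptions roughly two-thirds of alerts would point at the wrong person, even with a false-positive rate of only 0.1 per cent, because innocent faces vastly outnumber genuine matches in a crowd.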

Dr Joe Purshouse, a lecturer in criminal law at the UEA School of Law, said: "These FRT trials have been operating in a legal vacuum. There is currently no legal framework specifically regulating the police use of FRT.

"Parliament should set out rules governing the scope of the power of the police to deploy FRT surveillance in public spaces to ensure consistency across police forces. As it currently stands, police forces trialling FRT are left to come up with divergent, and sometimes troubling, policies and practices for the execution of their FRT operations."

A key concern of the researchers is the 'watch list' databases of facial images, assembled from lists of wanted suspects and missing persons but also from other 'persons of interest'. There is no legal prohibition on police forces taking images from the internet or social media accounts to populate these 'watch lists'.

There is a risk that people with old or minor convictions could be targeted by FRT, as well as those with no convictions whose images are retained and used by police after an arrest that did not lead to a conviction.

The accuracy of the technology has been brought into question by the researchers, leading to concerns that some individuals might be disproportionately included on 'watch lists'. 

The limited independent testing and research into FRT indicates that numerous FRT systems misidentify ethnic minorities and women at higher rates than the rest of the population.

A disproportionate number of custody images are of black and minority ethnic groups, and as these images are routinely used to populate FRT databases, there is a particular risk that members of the public from black or ethnic minority backgrounds will be mistakenly identified as 'persons of interest'. 
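The compounding effect the researchers describe can be made concrete with a simplified calculation. The entry counts and error rates below are invented for the example and are not census, custody, or trial statistics.

```python
# Invented illustrative figures; not census, custody, or trial statistics.
# Two effects multiply: over-representation in the database, and a higher
# per-comparison misidentification rate for the same group.
database_entries = {"group_a": 600, "group_b": 400}        # assumed watch-list skew
misid_rate       = {"group_a": 0.0005, "group_b": 0.0010}  # assumed per-comparison error

# Simplification: an innocent passer-by is most confusable with entries
# from their own demographic group, so only those comparisons are counted.
for group, entries in database_entries.items():
    expected_false_matches = entries * misid_rate[group]
    print(f"{group}: expected false matches per scan = {expected_false_matches:.2f}")
# group_a: 0.30, group_b: 0.40 -> despite having fewer database entries,
# group_b faces more expected false matches per person under these assumptions.
```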

Dr Purshouse said: "There appears to be a credible risk that FRT technology will undermine the legitimacy of the police in the eyes of already over-policed groups."

The police forces trialling FRT say the technology has been effective in preventing crime and ensuring public safety. The researchers say that currently there is no meaningful way of measuring success, but that the technology might be deterring those who could pose a threat to the public from attending gatherings where FRT surveillance is known to be in use.