Ban UK police use of facial recognition, House of Lords told
British police continue to deploy facial recognition technology disproportionately, without any clear legal basis and with highly questionable effectiveness, expert witnesses told a House of Lords inquiry.
In testimony to the Lords’ Home Affairs and Justice Committee on the use of advanced algorithmic tools by law enforcement, experts questioned the proportionality and effectiveness of how facial recognition technology has been deployed by the Metropolitan Police Service (MPS) and South Wales Police (SWP).
Silkie Carlo, director of the civil liberties campaign group Big Brother Watch, said that in five years the MPS had achieved just 11 positive matches using live facial recognition (LFR) technology, over the course of test deployments that began at the Notting Hill Carnival in 2016 and ended in February 2019 with two deployments in Romford, before fully operational use began in January 2020.
“During this time, they have scanned hundreds of thousands of people on the streets of London and created a lot of mistrust among communities, not least by repeatedly deploying at the Notting Hill Carnival – I think there is inevitably a racial element to that – and deploying on several occasions in the borough of Newham, which is the most diverse borough in London,” she said.
“Not only did they have just 11 true matches, they generated a lot of false positives. Their current rate across all deployments is 93% false positives, so I find it hard to see a world in which this could be classified as anything close to proportionate.”
Regarding the MPS LFR trials, Karen Yeung – interdisciplinary professor of law, ethics and informatics at Birmingham Law School – described the force’s scientific methodology as “very loose”, noting that because procedures were changed each time a trial was conducted, “we don’t have a stable and rigorous dataset from these experiments”.
She added: “In those 11 trials, 500,000 faces were scanned to produce nine to ten arrests, and many of those arrested were wanted for very trivial offences. All of this means real-time location tracking of many hundreds of thousands of Britons going about their lawful activities, disturbing no one.
“This is a serious reversal of the presumption that a person has the right to go about their lawful business undisturbed by the state, and I fully support Silkie’s view that this should be subject to very, very strict regulation, if not banned outright.”
Yeung further noted that, unlike LFR trials conducted in Germany and France, the MPS tested the technology on real suspects during live deployments.
“In other Western European countries they have used volunteers to test the accuracy of this data, and they have a full database of people walking past the cameras – this was not the case in London, where they carried out operational trials,” she said.
She added that although the MPS claimed to comply with data protection legislation, the documents released so far “are seriously deficient, in my opinion, in terms of how they have stated their operational objectives, and on the questions of impact assessment and proportionality”.
Yeung said any conclusions about the success of the MPS’s LFR trials are not supported by the available evidence.
A questionable legal basis
On the legal basis used by UK police to justify their facial recognition deployments, Carlo echoed the call of the UK’s former Biometrics Commissioner for an explicit legal framework, noting that there is currently no specific legislation governing use of the technology, and that police claim that “common law, human rights law and data protection law” allow them to use it.
In response to the Science and Technology Committee’s July 2019 report, which called for a moratorium on police use of LFR until an appropriate legal framework was in place, the government claimed in March 2021 – after a delay of almost two years – that there was “already a comprehensive legal framework for the management of biometrics, including facial recognition”.
The government said this framework included the police’s common law powers to prevent and detect crime, the Data Protection Act 2018 (DPA), the Human Rights Act 1998, the Equality Act 2010, the Police and Criminal Evidence Act 1984 (PACE), the Protection of Freedoms Act 2012 (POFA), and forces’ own published policies.
Carlo said that retrospective facial recognition (RFR), for which the MPS is expected to roll out a new system over the next three months, “again sits in a complete regulatory and safeguards gap … you can use this with body-worn cameras, you can use it with CCTV – the possibilities are truly endless … it goes as far as the imagination stretches.
“I think there should be a moratorium on the retrospective facial recognition technology that police forces are acquiring, which not only allows them to compare an isolated image against the custody image database, but effectively allows them to run facial recognition on any footage against potentially any type of database; this is a much broader use of the technology.”
A solution in search of a problem
According to Yeung, a key issue with police deployment of new technology – including facial recognition and algorithmic crime “prediction” tools such as the MPS Gang Matrix or Durham Constabulary’s Harm Assessment Risk Tool (Hart) – is that authorities have started using them “just because we can … without clear evidence” of their effectiveness or impacts.
As with facial recognition technology, Yeung said the development of crime prediction tools has been similarly weak, with historical arrest data used to determine who is likely to commit a crime.
“Just because someone is arrested doesn’t mean they are charged, let alone convicted, and there are all these crimes for which we have no arrests,” she said. “And yet these tools are used in Britain on the basis that they generate predictions of reoffending – we should at least label them as predictors of re-arrest.”
Yeung further noted that police use of such technologies has the potential to massively reinforce existing power disparities in society, because “the reality is that we have tended to use the historical data that we have, and we have masses of data, mainly about people from disadvantaged socio-economic backgrounds”.
“We are not building criminal risk assessment tools to identify insider trading, or who is going to commit the next kind of corporate fraud, because we are not looking for those kinds of crimes,” Yeung added.
“It’s really pernicious – what happens is we look at big data, which is mostly about the poor, and we turn it into prediction tools about the poor, and we leave whole swathes of society untouched by these tools.
“This is a serious systemic problem and we need to ask these questions. Why don’t we collect data, which is perfectly possible today, on individual police behaviour? We might then have identified rogue individuals inclined to commit violence against women. We have the technology; we just don’t have the political will to apply it to scrutinise the exercise of public authority.”