By Ray Walsh
Last week, London's Metropolitan Police once again took real-time facial recognition to the streets of East London.
The invasive facial recognition cameras were mounted on top of a police van outside Stratford station and the adjacent Stratford Centre.
Until recently, real-time facial recognition like that used by the Chinese government was considered a mechanism for totalitarianism, but now the Met is attempting to normalise its use in Britain. The force is ploughing forward with facial recognition despite warnings from rights groups, lawmakers and human rights experts that the technology is inaccurate at best, and inherently racist at worst.
In 2018, trials in Wales produced false positives 92% of the time – in other words, more than nine in ten of the system's matches were wrong. With such a low success rate, it seems fair to question the validity of deploying tech that ultimately stands to alter the fabric of society. What's even more disturbing is that those algorithms were revealed to unfairly discriminate when tracking people from Black and Minority Ethnic (BAME) backgrounds, making their use in racially diverse London neighbourhoods such as Stratford likely to result in excessive levels of racial profiling – something the Met has been criticised for since long before the invention of facial recognition.
Facial recognition algorithms are only as clever as the data used to develop them. And those algorithms have been shown to be woefully incapable of scanning the public without prejudice.
Real-time facial recognition fundamentally alters the definition of a public space and massively encroaches on human rights in the process. It turns public spaces into a virtual identity parade. Police claim they will only be looking for persons of interest, but the scope for abuse is massive.
Little to nothing is known about how innocent people's faces will be processed, and insufficient legislation is in place to rein in authorities' power to use facial recognition as a pretext for further surveillance.
What we do know is that citizens' biometric scans have massive value when it comes to further developing and improving facial recognition algorithms. This alone suggests that innocent people's biometrics could be leveraged in one way or another. As a result, innocent people's face scans could be held on file – putting them not only at risk of hacking, but at the authorities' disposal should they decide to track everybody in the future.
Any promises made now about how the technology will be used could be altered later. In the eyes of an algorithm, a person's face never really changes – meaning that if you walk through Stratford today your face could be lodged in the UK's system forever. With technologies like Clearview already being developed, the day is quickly approaching when citizens can expect their face to be known at all times and in all places.
Ray Walsh is a digital privacy expert at ProPrivacy.com.
The opinions in Politics.co.uk's Comment and Analysis section are those of the author and are no reflection of the views of the website or its owners.