The Information Commissioner’s Office (ICO) has announced that it has launched an investigation into the use of live facial recognition technology by the Police in King’s Cross, London. This follows investigations the ICO has been carrying out into the use of the same technology by South Wales Police.

The recording of citizens in public spaces in the UK is nothing new. Neither is the controversy surrounding this practice, from Orwell’s predictions about Big Brother to the outcry that Google Glass caused not long after its launch. Maintaining an adequate balance between individual privacy and public safety has always been difficult.

The most recent controversy in this area is the creeping use of live facial recognition technology by both public bodies and private actors. Live facial recognition differs from traditional CCTV because it does not simply record people: it compares their biometric data against that held in databases.

Data Protection Law

In the opinion of the Information Commissioner’s Office, the use of “software that can recognise a face amongst a crowd then scan large databases of people to check for a match…is processing personal data.” Accordingly, data protection law applies.

Just like the processing of any other personal data, the use of facial recognition technology to process personal data must have a lawful basis and a legitimate aim. In particular, because the data captured is biometric data used to identify individuals, it is “special category” data under the GDPR and so attracts additional safeguards.

The Police use this technology in an attempt to catch criminals. However, because it captures the biometric data of thousands of people as they go about their daily lives, it raises serious concerns both about privacy and about how that data is processed.

R (Bridges) v Chief Constable of South Wales Police

The use of facial recognition technology by South Wales Police sparked calls for the force to end the practice. In particular, the United Nations’ Special Rapporteur on the Right to Privacy, Joseph Cannataci, described the force’s use of the technology as “chilling”. In response, a local Cardiff resident took legal action, supported by the privacy and civil liberties organisation Liberty.

In brief, Mr Bridges was concerned that the Police had captured his image while he was out shopping in Cardiff city centre. On the one hand, the Police argue that the use of the technology is proportionate and lawful; on the other, Mr Bridges and his supporters argue that it is indiscriminate and raises serious privacy concerns.

Conclusion

The case was heard over several days in May 2019, with the Information Commissioner’s Office intervening. The judgment is expected later this year, so keep an eye on our website for an update on the outcome.

The controversy surrounding this technology is not unique to the UK. The city of San Francisco has recently voted to ban the use of facial recognition technology by local agencies such as police or transport authorities. Other US cities are considering following suit.

Even if the technology is deemed lawful in the UK, issues remain. For example, one study has shown that the technology is error-prone and biased, particularly when identifying women and people with darker skin.

The use of new technology to keep the public safe is usually to be applauded. However, anyone considering the use of facial recognition technology should seek advice lest they fall foul of data protection laws.
