Facial Recognition Technology: Don’t Throw the Baby Out with the Bathwater

The Metropolitan Police’s latest use of automated facial recognition (AFR) technology in Romford has again drawn criticism from campaign groups. While I can’t comment on the specifics of this particular case, I believe the debate about facial recognition technology is being hijacked by commentators whose views are unfailingly negative.

Meanwhile, successes in initial trials of AFR in South Wales and Humberside have made fewer headlines, even as the use of AFR to authorize access to a mobile phone handset, a secured location, or a country becomes increasingly common. What we need is critical and informed debate; what we are getting is cherry-picked information and hyperbole.

Unfortunately, the Government is too timid to say that biometrics, used in the right way, is a positive thing. Its biometrics strategy has been widely criticized for failing to provide clear policy and direction. Hence the debate is being brought to the public’s attention by campaign groups that are not looking to present the full picture. As a result, we risk losing the potential benefits of the technology in a storm of protest about specific, and perhaps flawed, implementations.

The public are rightly reluctant to hand over their digital data. However, the solution is not to ban facial recognition but to ensure that it’s used properly. We need to have a much wider discussion about the issues that facial recognition and other biometric technologies raise, and in particular about the control of data and data analytics.

The debate may have become so polarized because images are perceived as more personal, more closely connected to the individual, than other forms of personal data. In practice, however, there is no real justification for making such a distinction. Personal image data should be treated in the same way as any other personal data, and it is of course subject to the same controls, primarily through the GDPR.

The use of facial recognition technology is also governed by the UK’s human rights commitments, and there is no shortage of regulators, including the Information Commissioner’s Office (ICO), the Surveillance Camera Commissioner (SCC) and the Biometrics Commissioner. Interestingly, the SCC recently commissioned work to develop a strategy on human rights and civil liberties standards, although this was not formally announced but quietly added as a new workstream to the National Surveillance Camera Strategy for England and Wales.

So the problems with facial recognition technology are not due to the lack of a regulatory and legal framework. They arise when the capabilities of the technology are lost in wider systemic failings: poor organizational processes, lax regulatory standards and inadequate oversight. The real issue is the systems that underlie the technology, such as police databases.

In my view, the way to address this is to design the technology itself so that it controls how data flows and how it is stored and deleted, reliably, transparently and in full compliance with data protection legislation. This is crucial if these analytic technologies are to be accepted by the public as legitimate security tools that help keep them safe without breaching their human rights.
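
To make this concrete, here is a minimal sketch, in Python, of what building data-lifecycle control into the technology itself might look like. All of the names (`RetentionPolicy`, `ProbeRecord`, `AFRDataStore`) and the retention periods are hypothetical illustrations, not a description of any real AFR system:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class RetentionPolicy:
    """Hypothetical retention rules, e.g. drawn from a code of practice."""
    unmatched_ttl: timedelta = timedelta(hours=24)  # non-matches deleted within a day
    matched_ttl: timedelta = timedelta(days=31)     # matches kept only while needed

@dataclass
class ProbeRecord:
    probe_id: str
    captured_at: datetime
    matched: bool

class AFRDataStore:
    """A store that enforces deletion in code, not by manual procedure."""

    def __init__(self, policy: RetentionPolicy):
        self.policy = policy
        self._records: dict[str, ProbeRecord] = {}
        self.audit_log: list[str] = []  # transparent record of every action

    def ingest(self, record: ProbeRecord) -> None:
        self._records[record.probe_id] = record
        self.audit_log.append(f"INGEST {record.probe_id}")

    def enforce_retention(self, now: datetime) -> None:
        """Delete every record whose retention period has expired."""
        for probe_id, record in list(self._records.items()):
            ttl = (self.policy.matched_ttl if record.matched
                   else self.policy.unmatched_ttl)
            if now - record.captured_at > ttl:
                del self._records[probe_id]
                self.audit_log.append(f"DELETE {probe_id} (retention expired)")

# Example: a non-matched probe captured two days ago is deleted automatically.
store = AFRDataStore(RetentionPolicy())
store.ingest(ProbeRecord("p-001",
                         datetime.now(timezone.utc) - timedelta(days=2),
                         matched=False))
store.enforce_retention(datetime.now(timezone.utc))
print(store.audit_log)  # ['INGEST p-001', 'DELETE p-001 (retention expired)']
```

Because deletion is driven by the policy object rather than by an operator’s memory, the audit log provides the verifiable, transparent trail that regulators and the public could inspect.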

Control of the use of the data can be integrated into the hardware and software of AFR systems. This would provide a verifiable and practical means of enforcing the legal framework. Government and industry could work together to develop privacy and data protection standards for architectural regulatory control, and government could require public sector systems to conform to standards as a minimum, leaving the industry to develop further, more sophisticated controls which could give them a competitive advantage.
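
As a sketch of what “standards as a minimum” might look like in practice, consider a deployment that declares its controls in a machine-readable manifest and is refused startup unless it meets a baseline. The field names and baseline values below are invented for illustration, not taken from any published standard:

```python
# Hypothetical minimum standard a public-sector AFR system must declare
# conformance with before it is allowed to run. Illustrative only.
BASELINE = {
    "max_unmatched_retention_hours": 24,  # unmatched probes deleted within a day
    "audit_logging": True,                # every match and deletion is logged
    "privacy_impact_assessment": True,    # PIA completed before deployment
}

def violations(manifest: dict) -> list[str]:
    """Return the list of baseline requirements the manifest fails to meet."""
    problems = []
    declared = manifest.get("max_unmatched_retention_hours", float("inf"))
    if declared > BASELINE["max_unmatched_retention_hours"]:
        problems.append("retention period exceeds baseline")
    for control in ("audit_logging", "privacy_impact_assessment"):
        if BASELINE[control] and not manifest.get(control, False):
            problems.append(f"{control} not in place")
    return problems

# A deployment that retains unmatched probes for 72 hours and has no PIA
# is blocked before it can process a single image.
deployment = {"max_unmatched_retention_hours": 72, "audit_logging": True}
if problems := violations(deployment):
    raise SystemExit(f"Deployment blocked: {problems}")
```

Vendors would remain free to build richer controls on top of the baseline; conformance checking only enforces the floor.
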
It is time for serious debate over the regulation of analytics like AFR, so that we can work towards a framework that balances benefits against risks and takes a wider view of the regulatory options available. Such a framework should actively encourage users to self-regulate by deploying appropriate technology architectures and systems.

In my view, facial recognition has clear advantages for certain applications. But if it is to be used legitimately to tackle crime and terrorism without alienating the public, we urgently need to address the current policy void. AFR should be deployed only where it is genuinely needed, with effective safeguards such as privacy impact assessments designed into the technology and properly tested, so that our democratic freedoms and human rights are not abused.
