Advocacy Group Calls for FDA-Style Facial Recognition Regulator

Facial recognition technology can do everything from authenticating access to ensuring that people are paying attention in meetings, and even spotting potential medical conditions. Like many enabling technologies, though, it comes with a potential downside: organizations can misuse it, applying it in their own interests rather than those of the people whose faces it scans.

There hasn’t been a cohesive response to this problem. Instead, we have a patchwork of inconsistent legislation that makes it difficult to deploy facial recognition technology predictably. Now, an advocacy group has called for an FDA-style federal office to make the rules clear and consistent once and for all.

The call comes in the form of a report from the Algorithmic Justice League, a non-profit that seeks to raise awareness about the impacts of AI. It outlines some concerns around the unregulated use of facial recognition technology.

One of these worries is privacy violation, where identification information is sold to a third party without the subject’s consent. We’ve already seen this happen with Clearview AI, the US technology company that scraped billions of facial images from websites including social networks to sell on to law enforcement organizations.

The second, linked to the first, is violation of intended use, where images are used for purposes outside their original scope. These use cases might not be appropriate for the quality of the image, the report warns.

Third is performance assessment, where software may overestimate its own accuracy, and fourth is population modeling, where the software may not work properly for certain subgroups. The over-representation of white faces in a training set, for example, could skew recognition accuracy for people of color.
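
To make that population-modeling concern concrete, here is a minimal Python sketch of the kind of per-subgroup accuracy audit a regulator might require. The data and field names ("group", "correct") are entirely hypothetical; the report itself prescribes no such code.

from collections import defaultdict

# Hypothetical audit records: which demographic subgroup the subject
# belongs to, and whether the face recognition match was correct.
results = [
    {"group": "A", "correct": True},
    {"group": "A", "correct": True},
    {"group": "A", "correct": True},
    {"group": "B", "correct": True},
    {"group": "B", "correct": False},
    {"group": "B", "correct": False},
]

totals = defaultdict(int)
hits = defaultdict(int)
for r in results:
    totals[r["group"]] += 1
    hits[r["group"]] += r["correct"]  # True counts as 1

for group in sorted(totals):
    accuracy = hits[group] / totals[group]
    print(f"Subgroup {group}: {accuracy:.0%} accuracy over {totals[group]} trials")

# A large gap between subgroups (here 100% vs ~33%) is exactly the kind of
# population-modeling failure such an audit would need to surface.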

Various regions have already passed legislation governing the use of facial recognition, with calls for citywide bans in some areas. Federal legislators have also weighed in. The problem is that no one piece of legislation covers everything, the authors warn. Some bans are only temporary, for example.

“Domain specific laws that address exclusively either public or private sector use can leave unaddressed the critical interface with private companies and vendors supplying government agencies with FRTs [facial recognition technologies],” they warned. “Private companies that operate internationally have no obligation to remain loyal to US interests.”

The researchers pointed to the Food and Drug Administration (FDA) as a model for a federal office that could govern the use of facial recognition technology. They highlight some key concepts that the FDA follows, such as classifying proposed medical devices by risk according to their intended use, and using investigational procedures to gather more data about new medical devices with no market precedent. Those kinds of models might translate well to fast-evolving facial recognition technologies, they pointed out.
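
As a purely illustrative sketch of how that risk-based model might translate, the Python below maps a facial recognition system’s intended use to an oversight tier, with an investigational fallback for uses that have no precedent. All category names and tiers are assumptions for illustration; neither the FDA nor the report defines them this way.

from enum import Enum

class RiskTier(Enum):
    LOW = 1       # e.g. unlocking a personal device
    MODERATE = 2  # e.g. building access control
    HIGH = 3      # e.g. law enforcement identification

# Illustrative mapping from intended use to risk tier; in the report's
# vision, a regulator would define these categories, not the vendor.
INTENDED_USE_TIERS = {
    "device_unlock": RiskTier.LOW,
    "building_access": RiskTier.MODERATE,
    "law_enforcement_id": RiskTier.HIGH,
}

def required_oversight(intended_use: str) -> str:
    tier = INTENDED_USE_TIERS.get(intended_use)
    if tier is None:
        # No market precedent: gather more data before approval,
        # analogous to the FDA's investigational pathway.
        return "investigational review required"
    return f"tier {tier.value} ({tier.name}) controls apply"

print(required_oversight("building_access"))   # tier 2 (MODERATE) controls apply
print(required_oversight("emotion_analysis"))  # investigational review required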

A federal facial recognition office could create rules that indicate under what conditions specific facial recognition technologies should be used (and, conversely, when they shouldn’t be applied).

The technology is developing quickly, and its implications are widespread. Isn’t it time we considered a system of checks and balances at a national level to ensure that technocrats don’t apply it in ways that could be harmful to public health?
