ICO Warns of Fines for “Nefarious” AI Use

The UK’s privacy regulator has warned that public trust in AI is at risk of falling and said any use of the technology that breaks data protection law will be met with strong enforcement action.

Speaking at techUK’s Digital Ethics Summit 2023 on Wednesday, Information Commissioner John Edwards pointed to organizations using AI for “nefarious purposes” to harvest data or treat customers unfairly.

“We know there are bad actors out there who aren’t respecting people’s information and who are using AI to gain an unfair advantage over their competitors. Our message to those organizations is clear – non-compliance with data protection will not be profitable. Persistent misuse of customers’ information, or misuse of AI in these situations, in order to gain a commercial advantage will be punished,” he stated.

“Where appropriate, we will seek to impose fines commensurate with the ill-gotten gains achieved through non-compliance. But fines are not the only tool in our toolbox. We can order companies to stop processing information and delete everything they have gathered, like we did with Clearview AI.”

The Information Commissioner’s Office (ICO) fined Clearview AI £7.5m ($9.4m) last year for breaching UK data protection rules. However, the facial recognition software vendor subsequently won an appeal against the fine after a tribunal agreed that its processing of UK residents’ data was carried out solely on behalf of foreign law enforcement customers – chiefly agencies in the US – and therefore fell outside the ICO’s jurisdiction.

Edwards also told conference attendees of his fear that public trust in AI could be waning.

“If people don’t trust AI, then they’re less likely to use it, resulting in reduced benefits and less growth or innovation in society as a whole,” he argued. “This needs addressing: 2024 cannot be the year that consumers lose trust in AI.”

To maintain public trust in the technology, developers must embed privacy in their products from the design stage onwards, Edwards said.

“Privacy and AI go hand in hand – there is no either/or here. You cannot expect to utilise AI in your products or services without considering data protection and how you will safeguard people’s rights,” he added.

“There are no excuses for not ensuring that people’s personal information is protected if you are using AI systems, products or services.”
