How Can the Law Keep Up With Cyber-Attacks on AI?

What happens when technology moves more quickly than the law? That was the question facing speakers at the RSA Conference last month. A panel of experts from Microsoft, the Aspen Tech Policy Hub and Google Brain explored how the law can keep up with attacks on AI systems.

The problem is that while the law moves at a plodding pace, AI technology is evolving so quickly that legal frameworks aren’t keeping up with it. That makes it difficult to prosecute cases against people who deliberately manipulate AI systems for their own benefit, experts said.

“The beauty of law and policy is that we have to follow the technology,” said Betsy Cooper, director of the Aspen Tech Policy Hub. “That’s not set yet. So until we have a clear understanding of what the technology can do and how it can be manipulated, the technology will be ahead of where the law is.”

The speakers identified several kinds of attack on AI models, some of which have already been seen in the wild. In an evasion attack, adversaries alter the information fed to an AI algorithm so that the AI misinterprets it. They could alter a stop sign so that a self-driving car doesn’t read it properly and breezes through an intersection, say, or make a handwriting recognition system read one word as another. Attackers do this by changing small parts of the input at a time and testing the model’s reaction to understand how it works.
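To make that probe-and-nudge idea concrete, here is a minimal sketch against a toy classifier. Everything in it is hypothetical for illustration: `query_model` stands in for the victim model and `evade` is a naive random-search loop, not any specific attack discussed on the panel.

```python
# Minimal sketch of a black-box evasion attack (illustrative only).
# `query_model` is a hypothetical stand-in for the victim model; real
# evasion attacks use far more sophisticated search strategies.
import numpy as np

def query_model(x: np.ndarray) -> np.ndarray:
    """Stand-in victim: a toy two-class linear classifier returning probabilities."""
    w = np.array([0.7, -0.3, 0.5, 0.1])
    logits = np.array([x @ w, -(x @ w)])
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def evade(x: np.ndarray, target_class: int, step: float = 0.05, iters: int = 200) -> np.ndarray:
    """Perturb the input a little at a time, keeping only the changes
    that raise the probability of the attacker's target class."""
    rng = np.random.default_rng(0)
    best = x.copy()
    best_score = query_model(best)[target_class]
    for _ in range(iters):
        candidate = best + rng.normal(0, step, size=x.shape)
        score = query_model(candidate)[target_class]
        if score > best_score:  # keep the nudge only if it helps the attacker
            best, best_score = candidate, score
    return best

original = np.array([1.0, 0.2, -0.5, 0.3])
adversarial = evade(original, target_class=1)
print("original prediction:   ", query_model(original).round(3))
print("adversarial prediction:", query_model(adversarial).round(3))
```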

How can lawyers prosecute someone for manipulating input data in these ways? Cristin Goodwin, assistant general counsel at Microsoft, said that decades-old legislation like the Computer Fraud and Abuse Act isn’t well equipped for the task.

“We have to expect that if the technology hasn’t caught up, and the law hasn’t caught up, there is going to be a lot of room for maneuver, which is precisely what we’re seeing today,” she said.

Another attack, model poisoning, attempts to change how an AI system interprets things by corrupting its training data, the information it learns from. Many models gather their data from public information on the internet. If an attacker can influence that data, they can change the model’s ‘thinking’ patterns. One example that came up during the talk was Tay, the chatbot that Microsoft launched in 2016, which Twitter users deliberately taught to be racist and sexist by feeding it inappropriate comments.
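The same principle can be shown with a deliberately simple sketch: a toy nearest-centroid classifier, invented here for illustration, whose training set an attacker has seeded with mislabeled examples. The attacker never touches the model itself, only the data it later learns from.

```python
# Minimal sketch of training-data poisoning against a toy nearest-centroid
# classifier. All data and the classifier are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Clean data: class 0 clusters near (0, 0), class 1 near (4, 4).
clean_X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(4, 0.5, (50, 2))])
clean_y = np.array([0] * 50 + [1] * 50)

# Poison: points that look like class 1 but are labeled class 0,
# dragging class 0's learned center toward class 1's territory.
poison_X = rng.normal(4, 0.5, (30, 2))
poison_y = np.zeros(30, dtype=int)

def train_centroids(X, y):
    """'Training' here just means averaging each class's examples."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, x):
    """Assign x to whichever class center is nearest."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

clean_model = train_centroids(clean_X, clean_y)
poisoned_model = train_centroids(np.vstack([clean_X, poison_X]),
                                 np.concatenate([clean_y, poison_y]))

test_point = np.array([2.3, 2.3])  # sits on class 1's side of the clean boundary
print("clean model says:   ", predict(clean_model, test_point))
print("poisoned model says:", predict(poisoned_model, test_point))
```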

Cybersecurity risks also spill over into intellectual property theft. In a model stealing attack, an adversary sends many inputs to an AI algorithm and analyzes how it interprets them. They could use this analysis to reverse engineer the underlying model, helping them to develop their own commercial service, said Nicholas Carlini, research scientist at Google Brain.
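That query-and-mimic pattern can be sketched under simplifying assumptions. Here `victim_predict` is a hypothetical pay-per-query service, and the attacker’s surrogate happens to share the victim’s model family, which makes the extraction unrealistically clean; the point is only that the attacker recovers the model’s behavior without ever seeing its parameters.

```python
# Minimal sketch of model extraction ("model stealing"), illustrative only.
# The attacker harvests the victim's answers to many queries and fits a
# surrogate model to mimic them; the victim's weights are never exposed.
import numpy as np

rng = np.random.default_rng(0)
secret_w, secret_b = np.array([2.0, -1.0, 0.5]), 0.3  # victim's hidden parameters

def victim_predict(X: np.ndarray) -> np.ndarray:
    """Hypothetical remote service: returns probabilities, never the weights."""
    return 1.0 / (1.0 + np.exp(-(X @ secret_w + secret_b)))

# 1. Send many probe inputs and record the victim's outputs.
queries = rng.normal(size=(5000, 3))
answers = victim_predict(queries)

# 2. Fit a surrogate logistic model to the harvested (query, answer) pairs
#    by gradient descent on the cross-entropy against the stolen probabilities.
w, b, lr = np.zeros(3), 0.0, 1.0
for _ in range(2000):
    preds = 1.0 / (1.0 + np.exp(-(queries @ w + b)))
    err = preds - answers  # how far the surrogate is from mimicking the victim
    w -= lr * queries.T @ err / len(queries)
    b -= lr * err.mean()

print("victim weights:   ", secret_w, secret_b)
print("surrogate weights:", w.round(2), round(b, 2))
```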

Companies will need to build terms of service that explicitly outline what someone is allowed to do with their AI systems, warned Cooper. That means working with lawyers who understand the technology.

It will also mean reframing policy so that it can move more quickly with the times. “We basically need to build into our systems of policy making the flexibility so that when you pass a law, it does not mean that you have to wait for the next window of opportunity, 30 years later, to change it,” she concluded. “It needs to take into account changing technology in a dynamic way.”
