BlackBerry Cybersecurity President Warns Against Heavy-Handed AI Regulation

The threats posed by the malicious use of generative AI tools, particularly chatbots based on large language models (LLMs), have pushed various governments to take action.

The EU and Canada are working on legislation to regulate AI practices, respectively the EU AI Act and the Artificial Intelligence and Data Act, while the UK and the US have so far favored working hand in hand with AI developers and have yet to announce any binding regulation.

The latter two governments will take part in what British Prime Minister Rishi Sunak has described as the “first major global summit on AI safety,” to be held in the UK in autumn 2023.

During Infosecurity Europe, John Giamatteo, president of BlackBerry Cybersecurity, told Infosecurity Magazine what he expects from this upcoming summit, what role the cybersecurity industry should play in securing AI practices and why government intervention should encourage innovation and not stifle it.

Infosecurity Magazine: Threat actors have primarily used large AI chatbots to craft convincing phishing campaigns en masse and create polymorphic malware. Which of these two misuses is the most concerning?

John Giamatteo: The former is the one that particularly worries me. The fact that threat actors can craft more authentic phishing schemes, adapt them to target specific victims and make it more likely for an employee to make a wrong decision is concerning.

This is especially true when you consider the landscape in which they can now deploy these social engineering attacks. A decade ago, threat actors only attacked PCs. Today, the attack surface has expanded significantly: mobile phones, servers, the cloud, social media and more.

IM: How should the cybersecurity industry respond to these novel threats?

JG: Our industry should be more collaborative in general. We still have a long way to go, but we’ve made tremendous progress. Nowadays, a typical enterprise probably has six or seven security solutions working together. The additional threats that AI poses will only push us to cooperate even more.

The companies that can add more value to the equation are those that have AI expertise, including BlackBerry Cybersecurity’s Cylance AI. If you’re a legacy signature-based security company, you’re probably not as well-positioned to contribute to mitigating AI risks.

We should start by providing the right tools and capabilities to security operations center (SOC) analysts and integrating them into a single console so they are easier to use.

IM: How should governments get involved in mitigating AI risks?

JG: I’m not usually a fan of government intervention and regulation on private technology, but on this one, I think we’re going to see governments get more involved than with other technological innovations.

That’s a good thing because AI has the propensity for more profound changes than many other revolutions, and we need guidelines. The higher the risks, the more involved governments must be; this time, the risks are very high.

Additionally, governments can spur collaboration. The upcoming AI Summit in the UK, where the UK and the US will lead on global standards and parameters for AI, is an excellent example. I’m sure they’re going to enlist many other entities for that mission.

IM: What do you expect from this AI Summit from a cybersecurity perspective?

JG: I’d like to see the organizing countries set not regulations but advisory parameters and recommendations around how to securely manage this new environment.

A heavy-handed mandate telling private companies what to do might be a step too far at this stage.

I’m also sure they will take some input from security companies, particularly those already leveraging AI.

In some ways, we’re the subject matter experts on AI technologies. With the billions of threats that we collectively see with our AI security tools and the millions of endpoints that we protect around the world, we can be very helpful in drafting these recommendations.

IM: Does this mean the EU, whose parliament recently voted to advance the AI Act with strict restrictions on AI practices, has chosen the wrong approach?

JG: It is not my place to opine on who gets it right or wrong here, but government intervention should certainly encourage innovation and not stifle it.

What I hope for is a collaborative approach. I’d like to see these countries keep an open dialogue, learn from each other, and enable the best innovations.

BlackBerry Cybersecurity confirmed it was in contact with the AI Summit organizers.