Governance Gap Raises AI Security Concerns

Cybersecurity is now viewed as the most critical factor in AI adoption, but governance needs to catch up with the potential risks associated with the technology, a new study from Juniper Networks has revealed.

The networking vendor polled 700 AI managers at global companies to compile its report, “AI adoption is accelerating – now what?”

It found that, while 63% of respondents believe they are “most of the way” to their planned AI adoption goals, cybersecurity remains a major risk factor.

Whereas in last year’s report, AI tool capabilities (32%) and data availability (27%) were listed as the most important factors in enabling adoption, this year cybersecurity (29%) emerged as the clear leader, after being cited by just 14% in 2021.

In line with this thinking, a clear majority of respondents argued that, when AI doesn’t receive appropriate oversight, “accelerated hacking” and terrorism (55%) and threats to privacy (55%) emerge as the biggest risks to organizations.

That’s why nearly all (95%) AI leaders agreed that, in order to minimize potential negative impacts, companies must have AI governance and compliance policies in place.

Unfortunately, many are falling behind: just 9% said their AI governance is mature.

However, this is likely to change over the coming years, according to Juniper Networks' global security strategist, Laurence Pitt.

“In recent years, many European governments have stepped in to regulate the collection, storage and usage of data, spurring organizations to take a more proactive approach to internal AI governance to stay ahead of legislation and allow their AI solutions to expand safely,” he argued.

“As a result, organizations are developing comprehensive AI and data governance policies to protect against financial and reputational loss. As AI use continues to grow, we will see more being done to effectively govern and secure it.”
