Why we Need to Manage the Risk of AI Browser Extensions

Written by

AI has been around for decades. But the remarkable ascent of ChatGPT over recent months has propelled the technology fully into mainstream consciousness. It’s also elevated the conversation for IT security leaders about what represents acceptable risk in this space. Among all the possible risks AI poses to enterprise security, AI-based browser extensions are some of the most concerning.

In a world of hybrid working and productivity-at-all-costs, we must improve our readiness and capabilities for managing this emerging risk.

The browser is where employees work almost exclusively today.

Where’s the Risk?

It’s impossible to put a figure on how many AI-themed browser extensions there are in the popular browser ecosystems. A quick search on Chrome Web StoreMicrosoft Edge Add-Ons and Firefox Add-Ons reveals thousands. They extend the browser’s capabilities to help users improve their grammar and spelling, simplify workflows, mine data from web pages, translate text and even convert plaintext to code. 

But as helpful as these tools can be, the risks are immediately obvious. Many extensions are fake or compromised, hiding malware. Beyond those, some tools access sensitive corporate data, including emails. Some do this aggressively, with little or no warning in the terms & conditions. In this way, users may unwittingly share this information with the AI model, making it public to others.

In a recent example, Samsung was forced to temporarily ban employee use of generative AI after developers accidentally leaked source code in this way. When we factor in third-party breaches, the risk multiplies, as an AI company and/or the extension provider could be breached, possibly via a vulnerability exploited in its code.

Security commentators have also warned of prompt injection attacks, where web pages are crafted to steal corporate information via users’ plugins if generative AI tools crawl those specific pages.

Time to Update Those Policies

The challenge from a cybersecurity and risk perspective revolves around that age-old balance between productivity and security. IT teams don’t want to ban AI outright (this will drive it underground, not stop it). Browser extensions are one example of a risk that, if they are helping users add value to the organization, are a great addition. But equally, IT teams can’t allow an extension to act as a backdoor to share sensitive data if that’s part of its intended design. 

One compromise could be allow-listing, where vetted and approved extensions are allowed according to the role and risk of the user and system. This will require IT teams to gain insight into the use of personal and corporate devices connecting to your key systems and build upon any existing System Classifications mapping already conducted. Has your IT team done any risk assessment of AI extensions yet? How have they assessed the risks to your business? 

Google, Microsoft and others could also help by improving their vetting of extensions and adding more granular advice about privacy/security risks so that IT administrators can make better informed decisions. Ultimately, these decisions will depend on the risk appetite of the organization. But with AI on the march, the very least security teams must do is consider the risks faced from aggressive AI tools, the data potentially exposed and build greater awareness about browser extension risk into employee training and user education programs.

What’s hot on Infosecurity Magazine?