Can Generative AI Shrink the Cybersecurity Skills Gap?

Not a day goes by that a vendor doesn’t tell a CISO they can help solve the cybersecurity skills gap challenge. It has become the common value proposition of every new security product and service coming to market, and for good reason.

The skills gap is real. As organizations realize they must invest in security, demand continues to increase, yet the supply of cybersecurity experts is not growing at the pace required.

While security vendors’ focus on shrinking the skills gap could be viewed as capitalizing on fear, uncertainty and doubt (FUD), I believe most approach the conversation genuinely hoping to address the problem. After all, we are only as secure as the weakest link in our supply and data collection chains.

The holy grail vendors have been chasing is to democratize security by demystifying the secretive and specialized world of security analytics, so that any user familiar with the business can triage and remediate attacks. Years ago, the introduction of digital assistants that could interpret natural language and complete multi-step tasks transformed the consumer home. Assistants like Siri, Cortana, and Alexa made it easy to interact with technology and enabled non-tech-savvy individuals to take control of their digital domain.

Generative AI 

We saw some try to apply this type of interpretation to the cybersecurity domain, including my team at Endgame (now Elastic) with Artemis. While imperfect, it helped tier 1 analysts complete more advanced tasks and let tier 2+ analysts do more with their limited time. Now we are seeing the next evolution of this domain-specific digital assistant with the integration of generative AI. When Microsoft announced Copilot, powered by OpenAI’s GPT-4, the dream of making any user a cyber analyst came a step closer to reality. And Microsoft is not the only one applying generative AI to specific domains, with Google Bard and others hot on their heels.

Of course, with excitement also comes concern. Generative AI follows the same principle as any other analysis process: garbage in, garbage out. The data in the training model could be maliciously manipulated by adversaries or inadvertently corrupted by bad inputs from other users. We must be vigilant about the answers these solutions provide and always view them with some scrutiny. But is that any different from a team of human security experts today? Quality control already exists, with a tier 1 analyst recommending an action and a tier 2+ analyst approving it. As long as we keep a business expert in the loop on any change that affects the business, we should fully embrace generative AI as an accelerant that makes teams more successful.

We have only begun to scratch the surface of where domain-specific generative AI can help, but some potential applications are already understood:

  • What should I do next? Generative AI has shown it can draw on massive datasets to recommend next steps when an alert or incident is identified. This has been the dream of security teams for years, and applying AI here can significantly improve an analyst’s daily workflow.
  • Simple searching, AKA who cares about specs? When I ran a team of experts in a SOC at a large government organization, even they struggled to search for something across the breadth of different vendor solutions. Despite the push for more openness and shared schemas, vendors still have unique implementations and search languages. But shouldn’t searching across security tools be as simple as a Google search? Now it can be, with generative AI translating a plain-language search into the vendor-specific query language of choice (a minimal sketch of this translation step follows this list).
  • Knowledge sharing and closing the OODA loop. Incident response generates a wealth of valuable data, but it is often challenging to operationalize that data back into the cybersecurity practice. Imagine a world where generative AI uses past incidents as a training set to enrich future investigations based on previous activity. For example, “Hey, this host in the alert was involved in an incident last week, which means it may not have truly been sanitized.”
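To make the search-translation idea above concrete, here is a minimal sketch in Python. It assumes the OpenAI Python SDK with an OPENAI_API_KEY set in the environment; the model name, prompt wording, and target query languages (Splunk SPL, Elastic EQL) are illustrative assumptions rather than any vendor’s actual integration.

from openai import OpenAI  # pip install openai; the client reads OPENAI_API_KEY from the environment

client = OpenAI()

def translate_search(question: str, target_language: str = "Splunk SPL") -> str:
    """Rewrite an analyst's plain-English question as a query in the target search language."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    f"You translate security analysts' questions into {target_language} queries. "
                    "Return only the query, with no explanation."
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()

# The same question can be translated for each vendor's search language in turn.
print(translate_search("failed logins for admin accounts in the last 24 hours"))
print(translate_search("failed logins for admin accounts in the last 24 hours", "Elastic EQL"))

The point of the sketch is the workflow, not the specific calls: the analyst writes one plain-language question, and the model handles the per-vendor translation that analysts otherwise have to memorize.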

AI models have been updating and evolving faster than we ever thought possible, and there are countless applications in the cybersecurity domain that will continue to (positively and negatively) disrupt the lives of security analysts everywhere. The payoff to the promise of generative AI is cyber defense becoming integral to an entire organization and something that everyone can help achieve.

Historically, security leaders’ most significant challenges have been a lack of cybersecurity awareness across the business and a shortage of security defenders. But knowledge changes everything. Knowledge is the cornerstone of democracy, empowering everyone to make better decisions. Generative AI helps make cybersecurity accessible, exposing more people to the domain and bringing knowledge closer to every user. We can now truly democratize security.
