Microsoft Hails Generative AI as the Next Security Revolution

Microsoft is at the heart of the AI revolution, having invested billions of dollars in its partnership with ChatGPT creator OpenAI. As well as bringing generative AI to everyday users, the technology behemoth also acknowledges that generative AI is key to combating cybercrime.

Recognizing the potential of generative AI to enhance security operations, the tech giant launched the Microsoft Security Copilot tool in March 2023. Copilot is designed to assist security teams by automating tasks, aggregating security data and alerts from virtually any source, and rapidly generating actionable responses to queries.

Alym Rayani, VP of Security Go-to-Market at Microsoft, said combating the rapidly changing cyber threat landscape is the “fundamental and defining challenge of our time” and that generative AI will be crucial in this battle.

Infosecurity spoke to Rayani during Microsoft’s Envision UK event in London on October 18 to discuss Microsoft’s approach to security in the era of AI. Rayani also provided an update on Copilot’s latest developments.

Infosecurity Magazine: What is Microsoft Security Copilot?

Alym Rayani: We announced Microsoft Security Copilot in March 2023, and have had a few customers in ‘private access’ trying it so far. The first scenarios were about bringing generative AI to the security operations center (SOC) and helping those teams with their work. For example, telling them about their security posture (e.g., whether their devices are healthy) or providing details on a PowerShell script.

We also have our Microsoft Sentinel product, which Copilot is integrated with. That’s what a lot of our customers use as the foundation for their security investigations and operations centers.

Copilot is a generative AI tool; the signals and research we do all feed into it. It’s about understanding device health and threats, such as the signals relating to phishing, and it is curated with a set of security modelling. It’s not just a generic large language model – it is built on GPT-4 – but it’s also combined with other Microsoft tools and all our data, and it then delivers that experience to inform the person operating it.

Copilot’s ability to compress tasks that would otherwise require coding and time, while the clock is running on an incident, into a very short timeframe is amazing. Attacks move fast nowadays, so for Copilot to be able to write that query quickly is impactful for remediating and addressing attacks.

IM: What are the latest updates relating to the Copilot product?

AR: One of our announcements on October 19 is around expanded access to Security Copilot. Customers can now enroll in and purchase early access to Security Copilot.

With the early access program, we’re going to announce integration with our XDR platform, Microsoft 365 Defender, which gives this tool a new set of capabilities.

What does that mean for someone who’s working in Defender and operating the extended detection and response practice within their security organization? They’ll get an incident summary and can ask Copilot about that incident, such as ‘what is happening with this endpoint’ or ‘give me a guide on how to deal with this incident’. They’ll be able to use generative AI in the context of the investigation of an incident using Defender. That’s very powerful and is going to dramatically reduce the time it takes to understand an incident and then effectively remediate it.

In the hunting aspect of Defender, queries are a critical part of hunting from a security operations perspective. Security Copilot’s ability to answer those queries in Defender speeds up the investigation cycle.
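To make the hunting workflow concrete, the sketch below packages a Defender advanced-hunting query (the kind of KQL an analyst might otherwise hand-write, and which Copilot can draft) as a request to the Microsoft Graph security API’s `runHuntingQuery` endpoint. The KQL text and the bearer token are illustrative placeholders, and this is a minimal sketch rather than Microsoft’s own implementation.

```python
import json
import urllib.request

# Documented Microsoft Graph endpoint for Defender advanced hunting.
GRAPH_HUNTING_URL = "https://graph.microsoft.com/v1.0/security/runHuntingQuery"

def build_hunting_request(kql: str, token: str) -> urllib.request.Request:
    """Package a KQL hunting query as a Graph API POST request."""
    body = json.dumps({"Query": kql}).encode("utf-8")
    return urllib.request.Request(
        GRAPH_HUNTING_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Example hunt: PowerShell launched by Office apps in the last day --
# a common phishing follow-on behavior.
KQL = """
DeviceProcessEvents
| where Timestamp > ago(1d)
| where FileName =~ "powershell.exe"
| where InitiatingProcessFileName in~ ("winword.exe", "excel.exe")
| project Timestamp, DeviceName, AccountName, ProcessCommandLine
"""

req = build_hunting_request(KQL, token="<access-token>")
```

Sending `req` (e.g., with `urllib.request.urlopen`) would return matching events as JSON; the value Copilot adds is generating and refining the KQL itself, so the analyst spends time on triage rather than query syntax.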

One of the other exciting examples is real-time malware analysis – being able to reverse engineer malware and then get some insights. That’s something you need deeply skilled security experts for today, and organizations struggle to hire enough of those individuals. With Security Copilot integrated into Defender, you can start doing that straight away.

We’ve also announced that we’re going to include Microsoft Defender Threat Intelligence in Copilot. It is deeply integrated across products, so it layers that knowledge graph on top of Defender’s capabilities for managing endpoints and remediating issues.

We’ve been on a rapid development cycle; we got a lot of feedback from the private access program, and we’re looking forward to the next round of feedback.

"People are going to have to reinvent processes for AI because it’s moving so fast"

IM: In what other ways has Microsoft’s approach to security evolved in the AI era?

AR: Our approach to security comes down to three foundational areas of investment. We have to continue investing in signals – we’ve gone from 8 trillion signals two years ago to 65 trillion. We’re trying to build this security data lake of signals, everything from audit logs to endpoint health.

The human element is really the threat research that happens: we’ve got to continue investing in understanding nation-state attacks and threat actor groups.

We also have to continue building out the six product lines we have across security by listening to customers, taking their feedback and continuing to experiment.

The last piece is how do we leverage generative AI in these scenarios, and how do we use this attention on AI as a platform to get the message out about responsible AI? At Microsoft we’ve been fortunate to get a lot of attention on AI, and we also feel like it’s our responsibility to the world to evangelize and tell the story of how AI needs to be developed and deployed responsibly.

IM: How can organizations effectively implement generative AI security tools into their existing technology stack?

AR: Some of this is advice we’ve given for a little while: make sure you have the right security protocols in place. You don’t want to have gaps when you get into the area of AI. That means having a great data security plan and making sure you have conditional access for identities.

Also, figure out which places and vendors you trust to go on the AI journey with. Those two things allow an organization to transform itself. Your AI solutions are going to be reflective of your security posture, and continually leveraging AI to harden that posture is important.

IM: What are the biggest risks with using AI, and how can these be mitigated?

AR: If organizations are experimenting with AI, that’s a great thing. But are they experimenting in a security-centric way? You don’t want information that could be confidential about your customers or your own organization to be out in the wild. It is a pitfall to go out and use any AI solution without guardrails. Organizations have to make sure they’re safeguarding their information and business, just as they would with any other vendor.

If you don’t, then AI will reason over that data, and it may not be in a way you like. That’s one of the reasons we feel it’s our duty to talk about responsible AI. If you aren’t using a vendor that is applying those principles, then you have data out in the wild.

I also want people to realize that AI is meant to assist humans doing their work.

Fortunately, customers are asking the right questions, like ‘how can [Microsoft] ensure responsible AI?’ and ‘I want to make sure this is helping humans’ – because humans are accountable for what happens.

IM: What are the biggest AI-based threats organizations are facing today, and how can they keep up with the evolving threats?

AR: Malware generated by AI, phishing emails getting more sophisticated, and deepfakes where you’ve got voices being generated – which is very scary. It’s important to get the right tools in place, because they can already address a lot of these scenarios. If you have phishing protection in place, like Microsoft Defender for Office 365, then it’s already analyzing these things and will flag those emails.

Get the basics down and do them well. Then, when you get into some of the more advanced threats, take the next stage in your maturity model as a security organization. If deepfakes are being used, are you implementing risk-based conditional access, for example? Because you’re not just relying on a person’s voice or their password; you’re using location and what kind of data they are getting access to. There are a lot of different things you can do to mitigate that.
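As an illustration of the risk-based conditional access idea, a policy along these lines can be defined in Microsoft Entra ID (e.g., via the Microsoft Graph conditional access API). This is a hedged config sketch, not a complete or recommended policy; the display name and scoping are placeholders.

```json
{
  "displayName": "Require MFA for medium/high sign-in risk",
  "state": "enabled",
  "conditions": {
    "signInRiskLevels": ["medium", "high"],
    "users": { "includeUsers": ["All"] },
    "applications": { "includeApplications": ["All"] }
  },
  "grantControls": {
    "operator": "OR",
    "builtInControls": ["mfa"]
  }
}
```

The point Rayani makes is that such a policy evaluates signals beyond a voice or password – sign-in risk, user, and application context – so a convincing deepfake alone is not enough to gain access.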

IM: What has surprised you most about the impact of AI on cybersecurity so far?

AR: There are two things. The first is how amazing the technology can be. As a technology person, the power of AI was the first thing that surprised me. When I saw what the world would see tomorrow, I was like ‘wow.’ The other thing is the speed of innovation our teams have sustained to deliver this – it’s unprecedented.

People are going to have to reinvent processes for AI because it’s moving so fast.
