Top 10 AI Security Stories of 2023

Generative AI took the world by storm at the end of 2022, thrusting the field of AI into the limelight in 2023.

While AI adoption has skyrocketed, with 35% of businesses using AI in 2023 and a further 42% exploring its implementation, according to IBM, new concerns have emerged, too.

The Infosecurity Magazine team has explored generative AI's impact on cybersecurity at length throughout the past year.

Here are our top 10 AI security news stories in 2023.

1. Privacy Concerns Around ChatGPT

A few months after its launch in November 2022, one of the most common enterprise uses of OpenAI’s ChatGPT was drafting privacy notices. Ironically, the AI-powered chatbot itself has been under scrutiny from data protection experts.

The web-scraping method used to train the large language model (LLM) that ChatGPT is based on raised many questions around personal data collection and the handling of inaccurate data. Infosecurity spoke with AI and privacy experts to discuss whether the chatbot was compliant with existing legislation, including GDPR. We also explored whether its maker, OpenAI, took enough precautions to prevent some of those risks.

2. GPT Models Used For Malicious Purposes

Evidence of the use of ChatGPT for malicious purposes, such as creating polymorphic malware or drafting phishing emails, emerged in early 2023. This led OpenAI and Google, which launched ChatGPT competitor Bard, to implement guardrails to prevent such misuse. However, these guardrails appear to have been insufficient: the SlashNext State of Phishing Report 2023, published in October, recorded a 1265% surge in malicious phishing emails since Q4 2022, a rise the firm linked to the availability of ChatGPT.


While some black hat hackers have been utilizing legitimate LLM-based tools, others have started crafting their own malicious generative AI tools. Most of them have been given threatening names, like WormGPT, FraudGPT, WolfGPT, XXXGPT, PoisonGPT or DarkBard.

However, many experts have told Infosecurity that this trend will likely fade away.

Christian Borst, EMEA CTO at Vectra AI, is of this opinion.

He told Infosecurity: “Widespread LLM usage will fade away, but deepfakes will skyrocket. LLMs are typically quite difficult to use because they are unable to understand the context or provide reliable outputs, so the wider practical use of LLMs is restricted.”

As a result, Borst believes businesses will probably scale back their use of LLMs next year as they wait for these tools to become more functional and user-friendly.

“Threat actors will face the same issues with using LLMs, so we likely won’t see much complex activity like AI generating malicious code. But we can expect cybercriminals to harness generative AI to create more realistic and sophisticated deepfakes. This will give them a better chance of tricking users into giving up sensitive data or clicking on something malicious through more convincing audio or visual phishing lures.”

3. When the LLM Buzz Fizzles Out

The notion that threat actors are widely adopting LLMs for malicious activities was also challenged in November, when a Sophos X-Ops report showed that cybercriminals have so far been reluctant to use generative AI to launch attacks.

Examining four prominent dark web forums for discussions related to LLMs, the firm found that threat actors showed little interest in using these tools and even expressed concerns about the broader risks they pose. In two of the forums included in the research, just 100 posts on AI were found, compared with 1000 posts related to cryptocurrency during the same period.

The researchers revealed that the majority of LLM-related posts concerned compromised ChatGPT accounts for sale and ways to circumvent the protections built into LLMs, known as ‘jailbreaks.’

Additionally, they observed 10 ChatGPT derivatives whose creators claimed they could be used to launch cyber-attacks and develop malware. However, Sophos X-Ops said that cybercriminals had mixed reactions to these derivatives, with many expressing concerns that the creators of the ChatGPT imitators were trying to scam them.

The researchers added that many attempts to create malware or attack tools using LLMs were “rudimentary” and often met with skepticism by other users. For example, one threat actor inadvertently revealed information about their real identity while showcasing the potential of ChatGPT. Many users had cybercrime-specific concerns about LLM-generated code, including operational security worries and AV/EDR detection.

4. The Challenge of Detecting AI-Generated Content

As long as cybercriminals use AI chatbots to create malicious campaigns, even to a lesser extent than originally envisioned, defenders will have a hard time fighting back.

According to Egress’ Phishing Threat Trends Report, published in October, AI detectors cannot tell whether a phishing email has been written by a chatbot or a human in almost three cases out of four (71.4%). This is down to how AI detectors work: most of these tools are based on LLMs, so their accuracy increases with longer samples, and they often require a minimum of 250 characters to work.

Almost half (44.9%) of phishing emails do not meet the 250-character requirement, and a further 26.5% fall below 500 characters, meaning that AI detectors currently either will not work reliably or will not work at all on 71.4% of attacks.
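The 71.4% figure is simply the sum of the two length buckets (44.9% + 26.5%). The sketch below is a minimal illustration of that length-based triage, not Egress's methodology; the thresholds and function names are assumptions drawn from the figures above.

```python
# Minimal sketch (not Egress's methodology): bucket phishing emails by length
# to estimate what share falls below the sample size an LLM-based detector needs.

# Illustrative thresholds taken from the figures cited above.
MIN_CHARS_TO_WORK = 250      # below this, detectors reportedly do not work at all
MIN_CHARS_RELIABLE = 500     # below this, detection is reportedly unreliable


def detector_coverage(email_bodies: list[str]) -> dict[str, float]:
    """Return the share of emails in each detectability bucket."""
    total = len(email_bodies)
    if total == 0:
        return {"undetectable": 0.0, "unreliable": 0.0, "not_covered": 0.0}

    too_short = sum(1 for body in email_bodies if len(body) < MIN_CHARS_TO_WORK)
    unreliable = sum(
        1 for body in email_bodies
        if MIN_CHARS_TO_WORK <= len(body) < MIN_CHARS_RELIABLE
    )
    return {
        "undetectable": too_short / total,
        "unreliable": unreliable / total,
        "not_covered": (too_short + unreliable) / total,
    }


# The report's percentages reproduce the same arithmetic: 44.9% + 26.5% = 71.4%
assert round(44.9 + 26.5, 1) == 71.4
```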

5. Offensive Cyber to Help Secure Generative AI

In 2023, generative AI makers tried to show their commitment to securing AI tools. That’s why OpenAI launched a bug bounty program in April, offering white hat hackers up to $20,000 to find security flaws in its products and services.

Watch our Online Summit session: Transforming Security with Pentesting and Bug Bounties

In an exclusive interview during Black Hat Europe in December, Ollie Whitehouse, CTO of the UK National Cyber Security Centre (NCSC), told Infosecurity he was “particularly buoyed” by the conversation governments and private companies were having around AI.

He added that an initiative like OpenAI’s bug bounty program “shows that we’ve broken the traditional cycle of releasing a product and only starting to secure it once it’s been broken into.”

6. The US AI Regulation Roadmap Takes Shape

With generative AI under scrutiny, governments had to show they were doing something to secure AI systems. At first, the Biden administration seemed to be heavily relying on a self-regulation approach, securing, in July, voluntary commitments from seven generative AI powerhouses – Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI – to prioritize safety, security and trust in their AI systems.

Some of these same companies went on to form the Frontier Model Forum, an industry body focused on the safe and responsible development of frontier AI models.

However, many experts questioned the efficacy of self-regulation. The US government toughened its approach, starting with an October Executive Order on Safe, Secure, and Trustworthy AI. Notably, the EO introduced requirements to establish new AI safety and security standards.

To conduct this task, the Biden administration announced the creation of the US AI Safety Institute during the UK AI Safety Summit in early November. The new institute will be part of the US National Institute of Standards and Technology (NIST).

In mid-November, the US Cybersecurity and Infrastructure Security Agency (CISA) unveiled the US government’s full AI safety and security roadmap.

Read more: AI Safety Summit: OWASP Urges Governments to Agree on AI Security Standards

7. The UK AI Safety Summit: Achievements and Criticisms

Held in early November, the UK AI Safety Summit was the UK’s opportunity to flex its muscles as an AI leader while also trying to set the agenda for AI safety standards. The event was criticized for its narrow focus on ‘frontier’ AI models even before it started.

While the discussions stayed high-level, the UK government can boast that the event resulted in some agreements.

The event opened with the Bletchley Declaration, a text signed by 28 countries outlining opportunities, risks and needs for global action on ‘frontier AI’ systems that pose the most urgent and dangerous risks.

It closed with several countries signing an agreement with nine AI providers – Amazon Web Services (AWS), Anthropic, Google AI, Google DeepMind, Inflection, Meta, Microsoft, Mistral AI and OpenAI – to test their future AI models before launch.

The event also allowed the UK government to make a few announcements, including launching its own AI Safety Institute.

Read more: UK Publishes First Guidelines on Safe AI Development

8. The EU Passes Its AI Act With Generative AI-Inspired Tweaks

While the UK and the US trod carefully on AI regulation, the EU was expected to deliver the first AI law in the Western world.

The EU AI Act, in the pipeline since 2021, had to go through several tweaks following the massive adoption of general-purpose AI models in late 2022.

However, the bloc delivered, with the European Parliament adopting the latest draft of the legislation with an overwhelming majority in June 2023 and the EU institutions signing a provisional agreement in December after three days of ‘trilogue’ discussions. Technical details still need to be fine-tuned, but the AI Act will become law – perhaps as soon as 2025.


9. One Year of ChatGPT: The Impact of Generative AI on Cybersecurity

On the first anniversary of ChatGPT, Infosecurity spoke to industry experts about how LLM chatbots and generative AI have changed the cyber threat landscape in 2023 and the likely impact over the coming years.

It seemed that, outside of phishing, the impact of ChatGPT on the cybercrime landscape had been limited. Etay Maor, senior director of security strategy at Cato Networks, highlighted a number of factors behind cybercriminals’ initial reluctance to adopt generative AI tools at scale.

One of these is the practical issues with code created by LLM tools like ChatGPT. These include hallucinations – outputs that are factually incorrect or unrelated to the given context – and the inability of some LLMs to properly understand questions in specific languages, such as Russian.

In fact, using these technologies to create malware is a bad idea because AI chatbots are trained on past data and code that already exists, according to Borja Rodriguez, manager of threat intelligence operations at Outpost24.

“The most infectious malware are the ones that are developed with innovative ideas in the way they can infect machines or inject processes,” Rodriguez said.

While generative AI is on their radar, using it will not be a priority for cybercriminal gangs at the moment, argued David Atkinson, founder and CEO of SenseOn. He noted that tools for bypassing MFA are more valuable than anything ChatGPT can produce.

10. Deepfakes: The Looming Disinformation Threat

In 2024, 40 national elections are set to take place worldwide, making it the biggest election year in history.

This could be a boon for disinformation-spreaders, who will surely utilize deepfake tools in their manipulation campaigns.

According to ISACA, politicians are not the only ones worried about AI-powered disinformation. In its October 2023 Generative AI Survey, 77% of digital trust professionals said the top risk posed by generative AI today is misinformation and disinformation.

Chris Dimitriadis, Global Chief Strategy Officer at ISACA, said during the association’s Digital Trust Summit in Dublin, Ireland: “Pictures are worth a thousand words, and we’re not trained to question what we see. We’re only trained to question what we hear or read so this is a new advent for the human race, to question what we see as being legitimate or not.”
