RSAC: Experts Highlight Novel Cyber Threats and Tactics

As cybercriminals and threat actors enhance their tooling and capabilities, sophisticated new attack techniques are emerging, and it is vital that defenders stay abreast of this evolution.

Daniel Blackford, senior manager, threat research at Proofpoint, explained: “A lot of money is flowing into the hands of bad actors, they’re being very successful. That has allowed them to up their remit.”

This has fueled the expansion of the cybercrime ecosystem, with cybercrime-as-a-service offerings becoming more professionalized, he added.

Three key novel cyber threat themes emerged during the RSA Conference: the expanding attack surface, identity-based attacks and novel uses of AI.

The Expanding Attack Surface

While much of the focus of security teams tends to be on organizations’ endpoints, there are frequent reminders that the entire tech infrastructure can be targeted.

Mike Aiello, chief technology officer at Secureworks, highlighted a nation-state attack recently observed across the vendor’s customer base, in which a router was compromised.

Aiello told Infosecurity: “Your attack surface is more than your endpoint, your attack surface is your entire infrastructure, so it’s important to update and manage your network infrastructure as well.”

Johannes Ullrich, dean of research at the SANS Technology Institute, believes we have reached an “inflection point” in organizations’ technology stacks, as vast numbers of systems and software in use are no longer supported.

“A lot of these software stacks are showing their cracks,” he noted.

Many organizations use software written in multiple languages, some of them so old that it is hard to find developers who understand them, he added.

This reality is providing significant opportunities for threat actors, and organizations must start the process of incrementally replacing such systems.

AI Challenges Identity Verification and Online Safety

Issues around identity have come to the fore, particularly amid the growing availability of generative AI tools.

Ullrich said these tools have substantially reduced the cost of producing convincing voice deepfakes, causing problems in verifying identity online.

Such tools are increasingly able to defeat long-established identity verification mechanisms like CAPTCHA.

Ullrich believes the solution lies in risk-based identity, which keeps checks from being too intrusive. This will involve using AI to identify and flag unusual behavior for review.
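The risk-based approach Ullrich describes can be illustrated with a simple anomaly-scoring sketch. This is a hypothetical example, not a description of any vendor's product: the signals, weights and threshold below are all assumptions made for illustration.

```python
# Hypothetical sketch of risk-based identity checks: each login signal
# contributes to a risk score, and only high-risk sessions are flagged
# for more intrusive verification (e.g. step-up authentication or review).
from dataclasses import dataclass


@dataclass
class LoginAttempt:
    new_device: bool        # first time this device has been seen
    unusual_location: bool  # geolocation far from the user's norm
    odd_hours: bool         # outside the user's typical activity window
    rapid_requests: bool    # automation-like request timing

# Illustrative weights; a real system would learn these from behavioral data.
WEIGHTS = {
    "new_device": 0.3,
    "unusual_location": 0.4,
    "odd_hours": 0.1,
    "rapid_requests": 0.5,
}
REVIEW_THRESHOLD = 0.6  # assumed cutoff for flagging a session


def risk_score(attempt: LoginAttempt) -> float:
    """Sum the weights of every signal that fired for this attempt."""
    return sum(w for name, w in WEIGHTS.items() if getattr(attempt, name))


def needs_review(attempt: LoginAttempt) -> bool:
    # Low-risk sessions pass silently, keeping checks unintrusive.
    return risk_score(attempt) >= REVIEW_THRESHOLD
```

Under these assumed weights, a routine login from a known device scores 0.0 and passes without friction, while a new device in an unusual location crosses the threshold and is flagged for further checks.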

The growing scourge of sextortion was also highlighted at the event, which Heather Mahalik Barnhart, Digital Forensics and Incident Response (DFIR) curriculum lead at SANS Institute, described as “out of control.”

Sextortion is the practice of malicious actors coercing victims into sending nude photos of themselves, then demanding payment in exchange for a promise not to make the photos public.

It is a multi-faceted scam, involving online grooming and often using deepfake technology.

Barnhart noted that even when the intimate images posted online of someone are fake and AI-generated, the damage is still done: they will be seen by many people, and that cannot be undone.

A study by ESET in August 2023 found that sextortion scams rose by 178% in the first half of 2023 compared to the first six months of 2022.

“A photo can ruin your life, whether it’s real or not,” said Barnhart. A large portion of victims are schoolchildren.

The key to preventing sextortion attacks is awareness and training, emphasizing basic messages like “don’t talk to strangers online.” These need to be understood by both parents and children.

AI-Generated Malware

The utilization of generative AI to create malware has been debated by cybersecurity professionals, but it now appears that sophisticated threat groups are leveraging this tactic in campaigns.

A cyber campaign perpetrated by the group TA547, observed by Proofpoint in April 2024, used a PowerShell script to deliver the final payload that was “very likely” to have been generated by a large language model (LLM), such as ChatGPT, according to Blackford.

One of the key indicators was that almost every line of code was meticulously documented.

“I’ve never met a developer that wants to do thorough documentation, and all in flawless grammar,” noted Blackford.
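The indicator Blackford describes, unusually thorough per-line documentation, can be turned into a crude heuristic. The sketch below is a hypothetical illustration inspired by that observation, not a Proofpoint detection method; the `SUSPICIOUS_RATIO` threshold is an assumption.

```python
# Hypothetical heuristic: scripts where nearly every non-blank line carries
# a comment are unusual for human-written malware, so a very high comment
# ratio can serve as one weak signal of machine-generated code.
# (PowerShell and Python both use "#" as their comment marker.)

def comment_ratio(script: str, comment_marker: str = "#") -> float:
    """Fraction of non-blank lines that contain a comment."""
    lines = [ln.strip() for ln in script.splitlines() if ln.strip()]
    if not lines:
        return 0.0
    commented = sum(1 for ln in lines if comment_marker in ln)
    return commented / len(lines)

SUSPICIOUS_RATIO = 0.9  # assumed threshold, for illustration only

def looks_machine_documented(script: str) -> bool:
    # A weak signal on its own; would be combined with other indicators.
    return comment_ratio(script) >= SUSPICIOUS_RATIO
```

A heuristic like this is easy to evade and would produce false positives on well-documented legitimate scripts, which is why it could only ever be one signal among many.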
