The People Hacker: AI a Game-Changer in Social Engineering Attacks

Cybercriminals are using artificial intelligence (AI) to launch more sophisticated social engineering attacks, and experts are warning that it is becoming increasingly difficult to distinguish between what is real and what is AI-generated.

This trend is being highlighted at the UK government's AI Safety Summit, which is focusing on the risks of AI and strategies to mitigate them.

Among the most prominent ways malicious actors are using generative AI tools are crafting more realistic phishing emails and deploying deepfakes that impersonate the voices of senior business leaders to defraud companies out of vast sums.

These threats are on the radar of renowned social engineering expert Jenny Radcliffe, aka the People Hacker. During the recent ISC2 Security Congress, she told Infosecurity that AI will be a “game-changer” in social engineering attacks.

“Unfortunately, it’s on the side of the criminals because it’s difficult to distinguish what’s real and what’s AI-generated. The technology is learning all the time, correcting any mistakes that we do spot, and I think normal people are going to really struggle to spot a scam or con that’s AI-generated,” said Radcliffe.

A Human Answer to a Technical Problem

During her keynote address at the ISC2 Security Congress, Radcliffe argued that we must put our faith in humans to overcome AI-based threats. Speaking to Infosecurity, she said: “Unfortunately it’s a very technical problem that can only be solved by a human solution, which is knowing what to look for.”

"It’s a very technical problem that can only be solved by a human solution"

Radcliffe advocated a “four eyes for everything” approach in organizations, in which no financial decision can be authorized by a single person and must instead go through a second person. This second person should be aware that they need to perform a social engineering check.

To prevent such an approach impacting productivity, technical solutions like watermarks will be crucial, she added.
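The “four eyes” control Radcliffe describes can be sketched as a simple dual-authorization gate. This is a hypothetical illustration, not code from the source: the class, function and role names are assumptions, and a real implementation would live inside a payments or workflow system.

```python
# Hypothetical sketch of a "four eyes" control: no payment is released
# until a second, independent reviewer confirms it after explicitly
# performing a social engineering check. All names are illustrative.

from dataclasses import dataclass, field

@dataclass
class PaymentRequest:
    amount: float
    payee: str
    requested_by: str
    approvals: list = field(default_factory=list)

def approve(request: PaymentRequest, reviewer: str, se_check_done: bool) -> None:
    """Record an approval, but only if the reviewer is a different person
    than the requester and has confirmed the social engineering check."""
    if reviewer == request.requested_by:
        raise ValueError("Reviewer must be a different person than the requester")
    if not se_check_done:
        raise ValueError("Social engineering check not confirmed")
    request.approvals.append(reviewer)

def can_release(request: PaymentRequest) -> bool:
    # Requester plus at least one independent approver = four eyes.
    return len(set(request.approvals)) >= 1

req = PaymentRequest(amount=25000.0, payee="Acme Supplies", requested_by="alice")
approve(req, reviewer="bob", se_check_done=True)
print(can_release(req))  # True: a second person has signed off
```

The key design point is that the check is enforced in the workflow itself rather than relying on the approver remembering to do it: an approval without the confirmed check is rejected outright.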

Education will be a major component of combating threats. Awareness programs will have to evolve over time as we gain more understanding of AI and the gaps it is creating in organizations’ security.

Social Media Accounts Targeted

Throughout her career, Radcliffe has observed and adapted to an evolving landscape in social engineering. Another important trend is more targeted attacks involving individuals’ social media accounts – both through engaging them on these platforms and using the vast amount of personal data people give away about themselves and their family and friends.

The end goal is generally to infiltrate the company they work for – hooking people into a scam and building on it in layers to ultimately reach the organizations where they and their family work.

“We’re definitely seeing that chain of scams, probably because most companies have technology controls and education now. Attackers are starting the process outside of work and then working their way in,” explained Radcliffe.

Signs of Encouragement

Radcliffe said organizations are improving their ability to detect and protect against social engineering attacks. This is a result of most organizations adopting comprehensive cybersecurity awareness programs combined with an increased awareness of such attacks among the public.

“They can picture someone trying to break into an office, or a con artist at the end of a phone – that resonates with people,” she noted.

Radcliffe added that she is being stopped and questioned more while conducting her work trying to physically break into companies. “I wouldn’t say it is necessarily stopping us succeeding, but we are being stopped more,” she said.

One problem that remains is how to report scams. “One of the big issues is where do you report it and how useful is it to report it. People don’t know the answer to that, and they don’t always feel like scams will be followed up,” outlined Radcliffe.

She noted that getting help and justice after being scammed remains a “grey area” in this space.

A new regulation from the UK’s Payments Systems Regulator (PSR), which will require banks to reimburse victims of Authorised Push Payment (APP) fraud, attempts to provide an answer to this issue. However, Radcliffe is concerned such an approach could lead to personal responsibility being taken away from the public in terms of avoiding such scams.

“Unfortunately, there will always be a victim somewhere of criminal activity, but you can’t automatically blame banks unless it was caused by a gap in their operation,” she added.

Despite an increasingly digitized society, Radcliffe’s message is that humans remain both the primary target of cyber-attacks and the main means of protecting against them.
