UK Government in £8.5m Bid to Tackle AI Cyber-Threats

The UK has promised £8.5m ($10.8m) to fund new AI safety research designed to tackle cyber-threats including deepfakes.

Announced by technology secretary Michelle Donelan at the AI Seoul Summit today, the research grants will focus on “systemic AI safety” – that is, understanding how to better protect society from AI risks while harnessing the technology’s benefits.

The research program will be led by researcher Shahar Avin at the government’s AI Safety Institute and delivered in partnership with UK Research and Innovation and The Alan Turing Institute. Although applicants will need to be based in the UK, they will be encouraged to collaborate with other researchers in AI safety institutes around the world.

AI represents a two-pronged threat to economic and social stability. On the one hand, AI systems can themselves be targeted by techniques such as prompt injection and data poisoning; on the other, the technology can be used by threat actors to gain an advantage.

The UK’s National Cyber Security Centre (NCSC) warned in January that malicious AI use will “almost certainly” lead to an increase in the volume and impact of cyber-attacks over the next two years, particularly ransomware.


In fact, new research from compliance specialist ISMS.online released this week revealed that 30% of information security professionals experienced a deepfake-related incident in the past 12 months, the second most common response after “malware infection.”

At the same time, three-quarters (76%) of respondents claimed that AI technology is improving information security and 64% said they are increasing their budgets accordingly over the coming year.

AI Safety Institute research director Christopher Summerfield claimed the new funding represents a “major step” towards ensuring AI is deployed safely in society.

“We need to think carefully about how to adapt our infrastructure and systems for a new world in which AI is embedded in everything we do,” he added. “This program is designed to generate a huge body of ideas for how to tackle this problem, and to help make sure great ideas can be put into practice.”

The institute has already been conducting valuable research into AI threats. An update published on Monday revealed that four of the most widely used generative AI chatbots are vulnerable to basic jailbreak attempts.

The UK and South Korea yesterday hailed a “historic first” as 16 major AI companies signed new commitments to safely develop AI models.
