UK’s AI Safety Summit Scheduled For Early November

The UK’s much-anticipated summit on AI safety will reportedly be held in November, with cybersecurity experts welcoming the government’s focus on regulating emerging technologies.

A Downing Street spokesperson confirmed the event will take place at the start of November at Bletchley Park, home of the World War Two codebreakers, among them Alan Turing, widely regarded as a father of modern computing and AI.

There had been criticism of slipping timelines, following an initial announcement by the Prime Minister back in June. Aside from global government representatives and academics, tech giants including Google, OpenAI and Microsoft will be invited, according to the Financial Times.

Sridhar Iyengar, managing director for Zoho Europe, said he was “excited” by the prospect of the UK leading the way in AI regulation.

“To drive trust in the use of AI, government, industry experts and business need to collaborate to help develop the right rules, regulation and education which can accelerate adoption,” he added.

“Businesses can support this by taking care to implement ethical policies for staff to follow to ensure AI is used safely and risks are reduced. This collaboration can help a safe playing field to be developed, and can contribute to the UK taking a leading position in the development of AI.”

Read more on AI safety: Google Launches Framework to Secure Generative AI

Yi Ding, assistant professor of information systems at the Gillmore Centre for Financial Technology, said it was encouraging that the UK was trying to take a lead on developing rules of the road for safer AI use.

“Business decision makers and tech leaders hope for clarity around how AI can be used safely,” she added.

Venafi VP of ecosystem and community, Kevin Bocek, argued that AI systems need a “kill switch” based around machine identities in the event they go rogue.

“Rather than having one super identity, there would be potentially thousands of machine identities associated with each model, from the inputs that train the model, to the model itself and its outputs,” he added.

“These identities could be used as a de facto kill switch – as taking them away is akin to removing a passport, it would become extremely difficult for that entity to operate.”
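Bocek's idea amounts to a revocation-based control: a model cannot operate unless every machine identity tied to it is still valid. The Python sketch below is purely illustrative and hypothetical, not Venafi's product or any specific standard; all class and identity names are invented, and a real deployment would rely on certificates or workload identities checked against a proper revocation service.

```python
# Hypothetical sketch of a machine-identity "kill switch" for an AI model.
# All names are illustrative; a real system would use X.509 certificates or
# similar workload identities and an external revocation authority.

from dataclasses import dataclass, field


@dataclass
class IdentityRegistry:
    revoked: set = field(default_factory=set)

    def revoke(self, identity: str) -> None:
        # Revoking an identity is the "kill switch" for anything that depends on it.
        self.revoked.add(identity)

    def is_valid(self, identity: str) -> bool:
        return identity not in self.revoked


@dataclass
class ModelService:
    # A model carries many identities: its training inputs, the model itself,
    # and its output channels, echoing the "thousands of identities" idea above.
    identities: list
    registry: IdentityRegistry

    def infer(self, prompt: str) -> str:
        # Refuse to operate unless every associated identity is still valid.
        if not all(self.registry.is_valid(i) for i in self.identities):
            raise PermissionError("A machine identity was revoked - model disabled")
        return f"(model output for: {prompt})"


registry = IdentityRegistry()
model = ModelService(
    identities=["training-data:v1", "model:example-llm", "output-channel:api"],
    registry=registry,
)

print(model.infer("hello"))     # works while all identities are valid
registry.revoke("model:example-llm")  # pulling the "passport"
# model.infer("hello")          # would now raise PermissionError
```

In this toy version, removing any one identity is enough to stop the model from operating, which is the "removing a passport" effect Bocek describes.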

Bocek described the summit as a “welcome step forward” and said that if agreement could be reached on how to safeguard the use of AI globally, it would be a “groundbreaking achievement.”

However, the EU has arguably taken a lead on AI regulation with its landmark AI Act, currently being finalized by member states, parliamentarians and the European Commission.