AI Safety Summit: OWASP Urges Governments to Agree on AI Security Standards

Written by

Top-level discussions on security and ethical risks AI-powered tools pose are no longer enough to mitigate the dangers posed by the rapid adoption of artificial intelligence (AI), according to the Open Worldwide Application Security Project (OWASP).

Ahead of the AI Safety Summit, held in Bletchley Park, England, on November 1-2, the non-profit released a call to action urging Summit attendees to rapidly pledge to agree on – and adopt – actionable AI security standards.

The OWASP Foundation said in an open letter that it “wholeheartedly agrees with Lindy Cameron, the CEO of the UK’s National Cyber Security Centre (NCSC), for the urgent need to stay ahead of risks and the vital role of global industry security standards for AI. This will require alignment to avoid a Babel of standards.”

Speaking to Infosecurity, John Sotiropoulos, one of the core contributors to the OWASP Top 10 for LLM, explained the foundation’s positions: “We recognize the need to have top-level discussions, to explore the risks with policymakers and large organizations. However, we think it’s already time for action.”

“While we’re planning ahead, AI is already being used – and misused. We need to foster an immediate response. For instance, how can we report vulnerabilities found in AI applications since they don’t always fit with today's systems? Do we need to create a new framework or just update existing ones? We cannot do everything at once, but we cannot do everything sequentially, either. It’s a balancing act,” he added.

Read more: AI Safety Summit Faces Criticisms for Narrow Focus

Which AI Security Standards Does OWASP Promote?

In its open letter, the OWASP Foundation also asks governments to turn to open standard organizations, such as OWASP and the Linux Foundation, to adopt practical AI security standards.

“The first that comes to mind is the OWASP Top 10 for LLM Applications, of course. But we’re also in talks with organizations like MITRE and the US National Institute for Innovation and Technology (NIST), who are on the same page,” Sotiropoulos said.

He added that OWASP is collaborating with MITRE on its ATLAS framework that aims to map the evolving threat landscape, including AI threats in the mix.

The OWASP AI Exchange initiative also cites ISO/IEC 27090, the OWASP Top 10 ML and the CEN/CENELEC standards, on which the EU AI Act will be based, as essential standards on which governments could rely to align their AI safety policy.

Example of an AI engineering framework encompassing security risks from Software Improvement Group (SIG). Source: OWASP AI Exchange
Example of an AI engineering framework encompassing security risks from Software Improvement Group (SIG). Source: OWASP AI Exchange

In its open letter, the OWASP Foundation asked Summit attendees to join the AI Exchange, which the non-profit aims to establish as the collaboration hub to work on AI security standard alignment.

Although governments attending the AI Safety Summit have not yet addressed the OWASP Foundation’s call, Sotiropoulos said they received great feedback from individuals in the cybersecurity community, including people from cybersecurity agencies.

He also praised US President Biden's Executive Order on Safe, Secure AI, published on October 30, which mentioned that NIST will be asked to develop new standards for extensive red-team testing to ensure safety before the public release of any AI system.

Read more: 28 Countries Sign Bletchley Declaration on Responsible Development of AI

Six Actionable Steps AI Stakeholders Can Take Now

On November 2, Sotiropoulos, who is also a senior security architect at Kainos, co-authored a blog post with Suzanne Brink, Kainos’ data ethics manager and David Haber, founder and CEO of Lakera on some actionable steps governments, policymakers and AI tool providers should take.

These are:

  1. Increase testing and assurance: Foundation model providers must adopt proactive measures such as red teaming and comprehensive testing before releasing their models to the public – both proprietary and open-source models.
  2. Adopt actionable open standards, such as the OWASP Top 10 LLM
  3. Accelerate standards alignment to prevent the risk of conflicting taxonomies, contradictory vulnerability reporting, and inconsistent defenses, which could confuse and undermine those at the frontline of AI development.
  4. Invest in automated defenses, like solutions offering API-driven automation to test and secure LLM applications rapidly
  5. Integrate security with ethics using a data ethics framework when assessing the security and safety of AI practices
  6. Promote secure-by-design and ethics-by-design AI delivery

What’s hot on Infosecurity Magazine?