AI Safety Summit: Biden-Harris Administration Launches US AI Safety Institute

The US Department of Commerce has created a new government body to lead the US government’s efforts on AI safety and trust, the US Artificial Intelligence Safety Institute (USAISI).

US Vice-President Kamala Harris made the announcement during her speech at the UK’s AI Safety Summit in Bletchley Park on November 1.

How Will the US AI Safety Institute Work?

USAISI will sit within the US National Institute of Standards and Technology (NIST). It will be tasked with facilitating the development of standards for the safety, security, and testing of AI models, developing standards for authenticating AI-generated content, and providing testing environments in which researchers can evaluate emerging AI risks and address known impacts.

To achieve its mission, USAISI will leverage outside expertise, including working with partners in academia, industry, government, and civil society to advance AI safety.

It will work with similar institutes in allied and partner nations, including the UK’s AI Safety Institute, to align and coordinate work in this area.

Acting on Biden’s Executive Order on Safe, Secure and Trustworthy AI

The creation of USAISI comes just days after Biden signed the Executive Order on Safe, Secure, and Trustworthy AI, which tasks NIST with developing new standards for extensive red-team testing to ensure safety before AI systems are publicly released.

The US Secretary of Commerce, Gina Raimondo, attended the AI Safety Summit alongside Harris. She commented: “Together, in coordination with federal agencies across government and in lockstep with our international partners and allies, we will work to fulfill the President’s vision to manage the risks and harness the benefits of AI.”

NIST Director Laurie E. Locascio said her organization was “thrilled to take on this critical role for the United States that will expand on our wide-ranging efforts in AI.”

“USAISI will bring industry, civil society and government agencies together to work on managing the risks of AI systems and to build guidelines, tools, and test environments for AI safety and trust,” she added.

A New Set of US Initiatives to Advance AI Safety

The participation of a US delegation in the UK’s AI Safety Summit also gave the Biden-Harris administration an opportunity to announce several other initiatives that will shape the government’s approach to responsible AI.

These include:

  • Draft policy guidance on the US government’s use of AI, including a pledge to incorporate responsible practices in government development, procurement, and use of AI. This draft policy builds on existing documents, including the Blueprint for an AI Bill of Rights and NIST’s AI Risk Management Framework.
  • A political declaration on the responsible military use of AI and autonomy.
  • A new philanthropic initiative to advance AI in the public interest.
  • An effort to counter fraudsters who are using AI-generated voice models to target US citizens.
  • A call to support the development and implementation of international standards that enable the public to effectively identify and trace authentic government-produced digital content, as well as AI-generated or manipulated content, including through digital signatures, watermarking, and other labeling techniques (a simple signing sketch follows this list).
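To illustrate the kind of mechanism the final item points to, the sketch below shows how a publisher could attach a digital signature to a piece of content so that readers can verify its origin and integrity. It uses Python’s third-party cryptography package; the keys, content, and workflow are illustrative assumptions, not part of any standard referenced by the administration.

```python
# Hypothetical sketch: signing digital content so its provenance can be verified.
# Names and workflow are illustrative only.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The publisher generates a key pair once and distributes the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"Official statement published by the agency."

# The publisher signs the content before release.
signature = private_key.sign(content)

# A reader verifies the content against the published public key.
try:
    public_key.verify(signature, content)
    print("Signature valid: content is unmodified and comes from the key holder.")
except InvalidSignature:
    print("Signature invalid: content may have been altered or is not authentic.")
```

Production content-provenance schemes typically bind signed metadata to the media file itself rather than handling detached signatures, but the underlying verification principle is the same.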

This latest announcement builds on the voluntary commitments of 15 leading AI companies to develop mechanisms that enable users to determine whether audio or visual content is AI-generated.
