The US and UK have stepped up their collaboration on artificial intelligence (AI) regulation.

A pioneering agreement, signed by UK Science Minister Michelle Donelan and US Commerce Secretary Gina Raimondo, establishes a formal partnership between the two nations focused on the safety testing and risk assessment of AI technologies.

“The next year is when we’ve really got to act quickly because the next generation of [AI] models are coming out, which could be complete game-changers, and we don’t know the full capabilities that they will offer yet,” Donelan says. 

The agreement is considered the first of its kind in the world, reflecting a proactive approach by both governments toward mitigating the existential risks associated with AI, including its potential use in cyberattacks and bioweapon creation. 

The collaboration involves sharing technical knowledge, information, and talent related to AI safety, facilitating an exchange of expertise through secondments of researchers between the two nations’ respective AI safety institutes.

In particular, the partnership will focus on the independent evaluation of privately developed AI models from major companies such as OpenAI and Google.

The aim is to improve understanding and management of AI's potential risks, supporting safer development and deployment of the technology across sectors.

“This partnership is going to accelerate both of our institutes’ work across the full spectrum of risks, whether to our national security or to our broader society,” Raimondo said.

“Our partnership makes clear that we aren’t running away from these concerns – we’re running at them.”

An official statement on the partnership is available here. The agreement follows US President Joe Biden's executive order on new standards for "safe, secure, and trustworthy artificial intelligence", which requires developers to share safety information.