U.K. and U.S. Agree to Collaborate on the Development of Safety Tests for AI Models
2024-04-05 20:13

Under a newly signed Memorandum of Understanding (MoU), both countries will now "align their scientific approaches" and work together to "accelerate and rapidly iterate robust suites of evaluations for AI models, systems, and agents." This action is being taken to uphold the commitments established at the first global AI Safety Summit last November, where governments from around the world accepted their role in safety testing the next generation of AI models.

The MoU primarily concerns moving forward on the plans made by the AI Safety Institutes (AISIs) in the U.K. and U.S. The U.K.'s institute was launched at the AI Safety Summit with three primary goals: evaluating existing AI systems, performing foundational AI safety research, and sharing information with other national and international actors.

Firms including OpenAI, Meta and Microsoft have agreed to have their latest generative AI models independently reviewed by the U.K. AISI. Similarly, the U.S. AISI, formally established by NIST in February 2024, was created to work on the priority actions outlined in the AI Executive Order issued in October 2023; these actions include developing standards for the safety and security of AI systems.

These major tech companies are largely in charge of regulating themselves; last year, they launched the Frontier Model Forum to establish their own "guardrails" to mitigate the risk of AI.

What do AI and legal experts think of the safety testing?

AI regulation should be a priority. As the capabilities of AI models constantly change and expand, the safety tests performed by the two institutes will need to do the same.

At the same time, there is a pressing need for research into how to effectively test AI models, mitigate their risks, and ensure their safety.

News URL

https://www.techrepublic.com/article/uk-us-agreement-ai-safety-testing/