UK and US to Build Common Approach on AI Safety
The UK and US will work together to develop tests for the most advanced AI models.
On April 1, 2024, the UK’s Technology Secretary Michelle Donelan and the US Commerce Secretary Gina Raimondo signed a Memorandum of Understanding (MOU) committing both countries to cooperate on the safety testing of AI models.
The new partnership will see the US and the UK align their scientific approaches and work closely to rapidly develop and iterate robust evaluation suites for AI models, systems, and agents.
The UK and US AI Safety Institutes, both inaugurated during the AI Safety Summit in November 2023, have already laid out plans to build a common approach to AI safety testing and share their capabilities.
The collaboration will start with both institutes performing at least one joint testing exercise on a publicly accessible model.
They also intend to tap into a collective pool of expertise by exploring personnel exchanges between the institutes, sharing vital information about the capabilities and risks associated with AI models and systems, and conducting fundamental technical research on AI safety and security.
In a public statement, Donelan said this partnership “will continue to pave the way for countries tapping into AI’s enormous benefits safely and responsibly.”
“We have always been clear that ensuring the safe development of AI is a shared global issue. Only by working together can we address the technology’s risks head-on and harness its enormous potential to help us all live easier and healthier lives,” she said.
Raimondo added that the collaboration will accelerate the work of both institutes.
“Because of our collaboration, our Institutes will gain a better understanding of AI systems, conduct more robust evaluations, and issue more rigorous guidance.”
Read more: UK AI Safety Institute – A Blueprint for the Future of AI?