US and UK sign agreement to test the safety of AI models
The US has also taken steps to regulate AI systems and related LLMs. In November last year, the Biden administration issued a long-awaited executive order that hammered out clear rules and oversight measures to ensure that AI is kept in check while also providing paths for it to grow.
Earlier this year, the US government created an AI safety advisory group, including AI creators, users, and academics, with the goal of putting some guardrails on AI use and development.
The advisory group, named the US AI Safety Institute Consortium (AISIC) and housed within the National Institute of Standards and Technology, was tasked with developing guidelines for red-teaming AI systems, evaluating AI capabilities, managing risk, ensuring safety and security, and watermarking AI-generated content.
Several major technology firms, including OpenAI, Meta, Google, Microsoft, Amazon, Intel, and Nvidia, joined the consortium to ensure the safe development of AI.
Similarly, in the UK, firms such as OpenAI, Meta, and Microsoft have signed voluntary agreements to open up their latest generative AI models for review by the country’s AISI, which was set up at the UK AI Safety Summit.
The EU has also made strides in the regulation of AI systems. Last month, the European Parliament approved the world’s first comprehensive law governing AI. According to the final text, the regulation aims to promote the “uptake of human-centric and trustworthy AI, while ensuring a high level of protection for health, safety, fundamental rights, and environmental protection against harmful effects of artificial intelligence systems.”