UK Publishes First Guidelines on Safe AI Development
The UK’s National Cyber Security Centre (NCSC) has published what it claims to be the world’s first globally agreed guidelines on safe and secure AI development.
The Guidelines for Secure AI System Development were drawn up by the NCSC with help from industry experts and 21 other international agencies and ministries, including the US Cybersecurity and Infrastructure Security Agency (CISA).
A total of 18 countries, including all members of the G7, have now endorsed and “co-sealed” the guidelines, which are intended to help developers make informed cybersecurity decisions as they build new AI systems.
NCSC CEO Lindy Cameron argued that the rapid pace of AI development means governments and agencies must keep up.
“These guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout,” she added.
“I’m proud that the NCSC is leading crucial efforts to raise the AI cyber security bar: a more secure global cyberspace will help us all to safely and confidently realize this technology’s wonderful opportunities.”
The guidelines are broken down into four sections:
- Secure design covers understanding risks and threat modelling, as well as trade-offs to consider in system and model design
- Secure development features information on supply chain security, documentation, and asset and technical debt management
- Secure deployment is about protecting infrastructure and models from compromise, threat or loss, as well as how to develop incident management processes, and responsible release
- Secure operation and maintenance provides guidelines on actions to take once a system has been deployed, including logging and monitoring, update management and information sharing
Darktrace’s global head of threat analysis, Toby Lewis, argued that security is a prerequisite for safe and trustworthy AI.
“I’m glad to see the guidelines emphasize the need for AI providers to secure their data and models from attackers, and for AI users to apply the right AI for the right task,” he added.
“Those building AI should go further and build trust by taking users on the journey of how their AI reaches its answers. With security and trust, we’ll realize the benefits of AI faster and for more people.”