UK Announces “World-First” AI Security Standard
The UK government has announced a new AI Code of Practice which it claims will form the basis of a global standard for securing the technology, to be developed through the European Telecommunications Standards Institute (ETSI).
Published on Friday as a voluntary code of practice, alongside implementation guidance, it was developed in close collaboration with the National Cyber Security Centre (NCSC) and various external stakeholders.
The code’s 13 principles cover secure design, development, deployment, maintenance and end-of-life across the AI lifecycle. They apply to software vendors that develop AI, use third-party AI and offer it to customers, as well as to other organizations that create their own AI systems or use externally provided AI services and components.
The code will not apply to AI vendors that offer or sell models and components but play no role in developing or deploying them. These entities will instead be covered by a separate Software Code of Practice and Cyber Governance Code, the government said.
The principles are as follows:
- Raise awareness of AI security threats and risks through staff training
- Design AI systems for security, functionality and performance
- Evaluate/model threats and manage risks related to use of AI
- Enable human responsibility for AI systems
- Identify, track and protect assets, including interdependencies/connectivity
- Secure infrastructure including APIs, models, data, and training and processing pipelines
- Secure the software supply chain
- Document data, models and prompts with a clear audit trail of system design and post-deployment maintenance plans
- Conduct appropriate testing and evaluation
- Deploy securely, including pre-deployment testing and information for end users on how their data will be used, accessed and stored, and how to securely use, manage, integrate and configure the AI
- Maintain regular security updates, patches and mitigations
- Monitor system behavior with system and user action logs to support security compliance, incident investigations and vulnerability remediation
- Ensure proper data and model disposal
NCSC CTO Ollie Whitehouse argued that it is critical the UK prioritizes security as it looks to harness the transformative power of AI, in line with the ambitious AI Opportunities Action Plan announced by the government last month.
“The new Code of Practice, which we have produced in collaboration with global partners, will not only help enhance the resilience of AI systems against malicious attacks but foster an environment in which UK AI innovation can thrive,” he added.
“The UK is leading the way by establishing this security standard, fortifying our digital technologies, benefiting the global community and reinforcing our position as the safest place to live and work online.”
The move comes just a month after the government announced plans to criminalize the creation of sexually explicit deepfakes.