UK Government Releases New AI Security Guidance
The UK’s leading security agency has released new guidance designed to help developers and others root out and fix vulnerabilities in machine learning (ML) systems.
GCHQ’s National Cyber Security Centre (NCSC) has published its Principles for the security of machine learning for any organization looking to mitigate the threat of adversarial machine learning (AML).
AML attacks exploit the unique characteristics of ML and AI systems to manipulate a model’s behavior or extract sensitive information from it. AML has become a more pressing concern as the technology finds its way into an increasingly critical range of systems, underpinning healthcare, finance, national security and more.
“At its foundation, software security relies on understanding how a component or system works. This allows a system owner to test for and assess vulnerabilities, which can then be mitigated or accepted,” explained NCSC data science research lead, Kate S.
“Unfortunately, it’s hard to do this with ML. ML is used precisely because it enables a system to learn for itself how to derive information from data, with minimal supervision from a human developer. Since a model’s internal logic relies on data, its behavior can be difficult to interpret, and it’s often challenging (or even impossible) to fully understand why it’s doing what it’s doing.”
This is why ML components have historically not had the same level of scrutiny as regular systems, and why vulnerabilities can be missed, she added.
The new principles will help any entity “involved in the development, deployment or decommissioning of a system containing ML.” They aim to address several key weaknesses in ML systems, including:
- Reliance on data: manipulating training data could result in unintended behavior, which adversaries can then exploit (see the poisoning sketch after this list)
- Opaque model logic: developers may not be able to fully understand or explain a model’s logic, which can impair their ability to mitigate risk
- Challenges verifying models: it can be almost impossible to verify that a model will behave as expected across the full range of inputs it might encounter, given that there could be billions of them
- Reverse engineering: models and training data could be reconstructed by threat actors to help them craft attacks (see the extraction sketch after this list)
- Need for retraining: many ML systems use “continual learning” to enhance performance over time, meaning security must be reassessed each time a new model version is produced, which could be several times a day
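
To make the data-reliance weakness concrete, the sketch below shows a crude label-flipping poisoning attack. This is a minimal illustration assuming a generic scikit-learn workflow; the synthetic dataset, logistic-regression model and 10% attacker budget are hypothetical placeholders, not details from the NCSC guidance.

```python
# Minimal label-flipping data-poisoning sketch (illustrative only).
# The dataset, model and 10% attacker budget are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    """Train on (possibly poisoned) labels, score on the clean test set."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return model.score(X_test, y_test)

clean_acc = train_and_score(y_train)

# Attacker flips the labels of 10% of the training set.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=len(poisoned) // 10, replace=False)
poisoned[idx] = 1 - poisoned[idx]

poisoned_acc = train_and_score(poisoned)
print(f"clean: {clean_acc:.3f}  poisoned: {poisoned_acc:.3f}")
```

Even this blunt attack typically shaves measurable accuracy off the clean test set; subtler poisoning can instead implant targeted behavior that only triggers on attacker-chosen inputs.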
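The reverse-engineering weakness can be illustrated in the same spirit with a basic model-extraction sketch: an attacker who can only query a model’s predictions uses them to train a surrogate that mimics it. Again, the victim model, query budget and surrogate choice here are hypothetical stand-ins.

```python
# Minimal model-extraction sketch (illustrative only).
# Assumes the attacker can query a black-box victim's predictions;
# the victim, query budget and surrogate are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=10, random_state=1)
victim = RandomForestClassifier(random_state=1).fit(X, y)

# Attacker samples inputs and labels them with the victim's outputs...
rng = np.random.default_rng(1)
queries = rng.normal(size=(5000, 10))
stolen_labels = victim.predict(queries)

# ...then trains a surrogate that approximates the victim's behavior.
surrogate = LogisticRegression(max_iter=1000).fit(queries, stolen_labels)
agreement = (surrogate.predict(X) == victim.predict(X)).mean()
print(f"surrogate agrees with victim on {agreement:.1%} of inputs")
```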
“In the NCSC, we recognize the massive benefits that good data science and ML can bring to society, not least in cybersecurity itself,” Kate S concluded. “We want to make sure those benefits are realized, safely and securely.”