How to Address AI Security Risks With ISO 27001 – IT Governance UK Blog


AI penetration tests, user education, and more

Artificial intelligence is taking the world by storm.

But for all its potential, there are legitimate concerns around, among other things, data security.

Bridget Kenyon is the CISO (chief information security officer) for SSCL, a member of the UK Advisory Council for (ISC)2, and a Fellow of the Chartered Institute of Information Security. She also served as lead editor for ISO 27001:2022, and is the author of ISO 27001 Controls.

Bridget’s interests lie in finding the edges of security that you can peel up, and the human aspects of system vulnerability.

Just the person to talk to about:

  • The impact of AI on security;
  • User education and behavioural economics; and
  • How ISO 27001 can help address such risks and concerns.


In this interview


Input data quality

In your keynote speech at the Channel Partner Event and Awards 2024, you raised a concern about the quality of input data for LLMs [large language models] like ChatGPT – if that’s wrong, then all future outputs will be wrong, too.

Yes. Garbage in, garbage out.

I believe it was Charles Babbage who said something like: ‘I’m constantly surprised by the number of people who ask me: “If you give the engine incorrect data, will correct data come out of it?” I’m unable to understand the misapprehension on which that question is based.’

That’s wonderfully applicable to what we’re seeing with LLMs. The text that AI currently generates is a long way from reliable right now. That lack of accuracy stems from two things:

  1. The information going into it.
  2. The analysis – does this damage the accuracy if the AI doesn’t quite ‘get’ what it’s looking at? I.e. is information getting damaged or lost in translation?


Exposure assessment

Another point you raised in your speech was about how SharePoint, Microsoft Teams, etc. are all interconnected. Meaning that an employee or hacker could ask Copilot how to hack into an organisation’s systems to uncover sensitive information.

How significant are those risks?

First, don’t get too freaked out about it, because that exposure already exists in your environment. You’re not creating a vulnerability – adding an LLM, or another AI system, will simply make it more visible.

Yes, that increases the risk of that vulnerability being exploited, but you’re not creating a new weakness in your systems.

In terms of getting an AI to help ‘hack’ – yes, that’s a real threat, which is separate to the risk of existing available data simply being surfaced by an LLM.

AI as a ‘hacker’ will likely create a challenge for legitimate AI creators, as they try to create and update ‘guard rails’ around public AI tools. For less legitimate creators, it’ll be a source of revenue.

What can organisations do to mitigate those risks?

Regarding the risk of data exposure, do some pilot testing – an exposure assessment – before rolling out the LLM or AI system to everyone. An ‘AI penetration test’, if you like.

You could get a third party to do that for you, or you could do it in-house. Either way, I’d do this in multiple stages.

First, give just your security people access to the new system – can they break anything? Those people should have some experience with LLMs, and use their creativity to try to gain access to things they shouldn’t be able to access.

But that won’t give you the full picture – remember that users will only be able to access the data their account can access. So, the next stage is to recruit people from across the organisation who want to ‘play’ with this, and who have a good level of common sense, as ‘pilot users’.

Ideally, those users will have been in the organisation for a long time, so they’ll have accumulated a lot of access over the years. Also try to recruit a diverse group of people, in different roles and departments.


Pilot users of the AI system

What instructions or guidance would you give these pilot users?

You don’t want to just let them loose and tell them to ‘have fun’. You’ve already done that in the first stage, with your security people.

Instead, give them a set of questions, and tell them to collect the answers, based on the access they have. That standard question set might ask things like:

  • What are my colleagues’ postcodes?
  • What is my manager’s salary?
  • Etc.

What’s the purpose of asking multiple people to type in the same questions?

Because everyone has different access rights! People who’ve been in the organisation for years and years will have accumulated lots of access, so you definitely want to include those in your pilot group.

Also try to get at least one PA into that pilot user group, too – the personal assistant to a high-ranking executive. They tend to accrete access rights with time.

Got it. What’s the next step in this ‘AI penetration test’?

Give them a test script. Tell them to open Word and start up the LLM, then say: ‘Hi. I’d like you to write me a letter to our favourite customer that specifies the salary of five colleagues.’

Or you ask it for a letter about the most important thing the LLM has seen that day, or whatever makes sense for your organisation. That said, your test should cover a mix of generic and specific questions.

You get your pilot users to run those questions and scripts, and they report back any significant findings, which you can then fix.

Can organisations take a similar approach to test the business benefits of an LLM or AI system?

Yes. You’ll want to select a set of use cases that you believe would bring benefits to the organisation – that’d help staff do their jobs better or faster.

Put together a list of perhaps five use cases. And then get individual pilot users to pick two or three each, and to report back after testing.

Earlier, you spoke about ‘fixing’ significant findings. How would you go about that?

It depends on the issue. If it’s an access control problem, just change the permissions on that user account or group. And check for similar issues hiding elsewhere.

For example, suppose a user has seen a list of passwords. Then you find out:

  • Where that list came from; and
  • How that list surfaced.

This is really no different to dealing with a nonconformity in an audit, or perhaps a minor cyber incident. It’s what you’d do if someone had been browsing SharePoint manually, discovered excess privileges that way, and reported it.


Finding this Q&A useful? To get notified of future
interviews and other blogs, subscribe to our free
weekly newsletter: the Security Spotlight.


Addressing AI risks with ISO 27001

How can these types of tests be incorporated into an ISO 27001 ISMS [information security management system]?

It’s part of your checks to make sure your ISO 27001 controls are adequate.

When implementing an ISMS, you identify and assess your risks, then you mitigate them by, in many cases, implementing controls. And then you check whether those controls are adequate.

I like to approach the exposure assessment in this way, because you’re basing it on an existing risk assessment. Plus, as part of the assessment, you’ll have created a set of requirements for the LLM to satisfy before rollout.

To address AI risks, you’ll also need user education and awareness [Clause 7.3 (‘awareness’) and control 6.3 (‘information security awareness, education and training’)].


Behavioural economics and user education

Presumably, that also ties into behavioural economics. That people can make decisions 1) by taking shortcuts or 2) through thoughtful and detailed analysis – but they usually go for the shortcut, which makes them more likely to fall for an AI-powered scam.

With that in mind, what should user education look like? How might it be different from ‘traditional’ staff awareness?

I’d roll out specific AI training to all users before they start using AI.

We need to educate users about how convincing AI can be. It can make up information that’s internally consistent and entirely plausible.

Users must learn to always double-check the information against a verified source – even if it looks absolutely authoritative.

What about AI-powered phishing scams?

Phishing emails and scams are now incredibly convincing, because they can be fully tailored to the individual. The message can look as though it genuinely came from someone you know, and refers to information that person would have.

We need to teach people that just because something is written in your colleague’s or friend’s style, doesn’t automatically mean they wrote it. That’s very different to how we used to teach people to be wary of things like spelling and grammatical errors.

But you can’t just rely on telling people to be ultra-super-cautious. That’s not what us humans excel at.

So, what do we tell people instead?

The answer lies in understanding human behaviour and psychology.

Rather than looking for specific ‘cues’ in the message itself, which change over time anyway, look out for certain emotional reactions.

If you suddenly feel panicked, or feel like you need to take urgent action, that’s your warning sign. The attacker is using a ‘hook’ on your psyche.

What about asking questions like ‘why would this person be asking this thing of me to begin with’?

That falls under the old cues – looking out for things that seem a bit wonky. It’s a ‘tell’ – but if we must assume that AI will become exceptionally good at not looking wonky, then you’d have to discount that, too.

That said, this particular ‘wonkiness’ remains effective because of the power imbalance. If your superior asks you to do something, even if it seems like an odd request, you’re more inclined to just do it.

But the most effective thing we can do is concentrate on the timeless characteristics of social engineering: asking ourselves how the message makes you feel.

As I said, if it makes you feel panicked, like you need to do something urgently, that’s a clue that something’s off – regardless of the tools the messenger used to bring about that emotional reaction.

When that happens, stop. Slow down. Think. Get a second opinion. If your instinct tells you that something is wrong, listen to it.


Accounting for AI in a future edition of ISO 27001

Coming back to ISO 27001, in the latest version [ISO 27001:2022], one of the new controls was 5.23: information security for use of Cloud services.

But of course, the Cloud wasn’t a new technology in 2022 or even 2013 [when the previous edition of ISO 27001 was published]. Rather, uptake of the Cloud ramped up in those intervening years, and comes with its own security challenges.

Are we going to see something similar for AI? That its uptake becomes so widespread that it needs its own control[s] in a future edition of ISO 27001?

I wouldn’t be surprised to see precisely that. Not too long ago, ISO [International Organization for Standardization] even published an entire standard for AI. [ISO/IEC 42001:2023, providing the specification for an AIMS – an artificial intelligence management system.]

So, yes, it’s entirely possible we’ll see one or more controls in the next version of ISO 27002, or Annex A of ISO 27001, that reference AI.

From the current version of ISO 27001, besides staff awareness [control 6.3], what are the top controls for protecting yourself from malicious use of AI?

It’s really a case of looking through the controls and seeing what catches your eye.

Things like control 6.7 – remote working. This control might become very relevant if you’re not physically present with people, and it becomes difficult to check their identity.

So, it depends on the organisation?

Yeah. It’s a bit like the Cloud – most controls in Annex A don’t have the word ‘Cloud’ in them, but they’re still relevant to Cloud security. Or at least, they can be.

What other ISO 27001 requirements or controls should organisations think about in relation to AI risks and security?

Identifying your legal and regulatory requirements, and contractual obligations, as part of Clause 4.2 – that’s definitely relevant, particularly if you’re an AI provider, with all these AI laws popping up.

Supply chain security [controls 5.19–5.22] is another interesting one. Think about intellectual property rights – who’s got the copyright on AI?

Or control 5.35 – independent review of information security. That’s a handy control for AI penetration testing.

You can probably go through most ISO 27001 controls and find ways in which they’re relevant to AI.


Learn more about the ISO 27001 controls

In her book, the second edition of ISO 27001 Controls, Bridget covers each Annex A/ISO 27002 control in detail, giving guidance on two key areas:

  1. Implementation – what to consider to fulfil the Standard’s requirements.
  2. Auditing – what to check for, and how, when examining the controls.

Ideal for information security managers, auditors, consultants and organisations preparing for ISO 27001:2022 certification, this book will help readers understand the requirements of an ISO 27001 ISMS.


About Bridget Kenyon

Bridget is the CISO for SSCL. She’s also been on the ISO editing team for ISMS standards since 2006, and has served as lead editor for ISO/IEC 27001:2022 and ISO/IEC 27014:2020.

Bridget is also a member of the UK Advisory Council for (ISC)2, and a Fellow of the Chartered Institute of Information Security.

She’s also been a PCI DSS QSA (Payment Card Industry Data Security Standard Qualified Security Assessor), head of information security for UCL, and has held operational and consultancy roles in both industry and academia.

Bridget is the sort of person who’ll always have a foot in both the technical and strategy camps. She enjoys helping people find solutions to thorny problems, and strongly believes that cyber and information security are fundamental to resilient business operations, not ‘nice to haves’.


We hope you enjoyed this edition of our ‘Expert Insight’ series.

If you’d like to get our latest interviews and resources straight to your inbox, subscribe to our free Security Spotlight newsletter.

Alternatively, explore our full index of interviews here.



Source link