Layering Defences to Safeguard Sensitive Data Within AI Systems
Strategies for mitigating privacy and security risks
As artificial intelligence develops relentlessly, organisations face a thorny problem:
How can you harness the transformative power of AI tools and systems while ensuring the privacy and security of your sensitive data?
We put the question to our head of AI product marketing, Camden Woollven.
What security or privacy challenges do organisations face when using AI tools?
The risk of inadvertently exposing sensitive data is a big one.
Most generative AI systems are basically massive ‘sponges’: the language models behind them are trained by soaking up huge quantities of publicly available information.
So, in the workplace, someone feeding the AI confidential information such as:
- Strategy documents;
- Client information; or
- Financial records…
…means data could end up in the AI provider’s training pipeline and subsequently leak. We may see AI-powered CRM [customer relationship management] systems spilling customer data, HR platforms exposing employee records, and so on.
What other security risks might AI systems present?
Hackers might also target the AI systems directly.
Theoretically, if an attacker gained access to the LLM [large language model] that powers the AI tool, they could:
- Syphon off sensitive data;
- Plant false or misleading outputs; or
- Use the AI as a Trojan horse to spread malware.
Do you believe these risks will heighten in the future?
That stands to reason:
- AI systems are only going to become more sophisticated and omnipresent in the workplace.
- We’re already expanding from just text-based data to AIs that can analyse and generate images, audio, video, and so on.
Organisations must protect all their information assets, not just text-based ones. Plus, as AI gets baked into more and more enterprise software, attack surfaces will continue to grow, too.
How can organisations ensure employees use these tools safely?
The most important thing is to put clear guidelines in place around what data can and can’t be shared with AI systems. For example, any type of sensitive data – including personal data – should be off-limits.
You also need to train staff on your AI policy and guidelines. Make sure they understand the risks around AI, and their responsibilities. And get staff into the mindset of treating AI like any other third-party service: if you wouldn’t want it publicly broadcast, don’t share it.
But don’t limit yourself to just people and processes – support them with technical measures.
The more you layer your defences, the more protected you’ll be. Technical safeguards – like enterprise-grade AI solutions that offer data encryption, access control and auditing features – mitigate the impact of any human error.
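To make that last layer concrete, here’s a minimal sketch of one such technical safeguard: a pre-submission filter that redacts obvious sensitive patterns before a prompt ever leaves the organisation. The patterns and the `safe_prompt` helper are illustrative assumptions for this post, not any specific product’s API – a real deployment would use a proper DLP (data loss prevention) tool.

```python
import re

# Illustrative patterns only - real DLP tooling is far more thorough.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_ni_number": re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Replace sensitive matches with placeholders; report what was found."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text, findings

def safe_prompt(prompt: str) -> str:
    """Sanitise and audit a prompt before it reaches an external AI service."""
    clean, findings = redact(prompt)
    if findings:
        # Auditing hook: log the event so policy breaches stay visible.
        print(f"Redacted before sending: {', '.join(findings)}")
    return clean  # hand `clean` to the AI provider's client, never `prompt`

print(safe_prompt(
    "Summarise: contact jane.doe@example.com re card 4111 1111 1111 1111"
))
```

The point isn’t the regexes – it’s that the filter sits between people and the AI service, so a moment’s carelessness doesn’t become a data leak.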
What changes would you like to see to make the everyday use of AI safer?
In an ideal world, AI systems and tools should be designed with privacy and security in mind.
‘Privacy by design’ is already a GDPR [General Data Protection Regulation] requirement – seeing this principle as standard practice for AI tools would be fantastic. But, as things stand, much of the burden is on organisations and people to deploy AI safely.
The notion of ‘federated learning’ can really help, too. Instead of the centralised model of ‘hoovering up’ everyone’s data into giant, all-purpose language models, AI providers would train a shared model across participating organisations, with each organisation’s raw data staying on its own infrastructure – only model updates ever leave.
It’d ensure sensitive data never leaves the organisation’s control.
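As a toy illustration of the idea – a deliberately simplified sketch in NumPy, not any vendor’s implementation – here is federated averaging: each participant fits a simple model on data that stays local, and only the fitted parameters cross organisational boundaries to be averaged.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_train(X, y, weights, lr=0.1, steps=50):
    """One organisation trains on its own data; X and y never leave this function."""
    w = weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

# Three organisations with private datasets drawn from the same underlying task.
true_w = np.array([2.0, -1.0])
datasets = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    datasets.append((X, y))

# Federated averaging: share model parameters, never data.
global_w = np.zeros(2)
for _ in range(5):
    local_ws = [local_train(X, y, global_w) for X, y in datasets]
    global_w = np.mean(local_ws, axis=0)  # the only thing crossing org boundaries

print("learned:", global_w, "target:", true_w)
```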
What can AI vendors themselves do to make AI more secure and privacy-minded?
AI vendors need to be transparent about how data is collected, used and secured.
At the moment, it feels a bit like a ‘black box’ – data goes in, magic comes out, but what happens in between is a mystery. Organisations should be able to easily audit what data an AI has ingested, flag anything inappropriate, and request deletion if needed. Think GDPR-style data subject rights, but for AI!
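No mainstream AI provider exposes exactly this today, so treat the following as a hypothetical interface – a sketch of what such an ingestion audit trail could look like: a ledger that records what an AI system ingested, lets a reviewer flag entries, and honours deletion requests.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class IngestionRecord:
    record_id: str
    source: str      # e.g. "crm-export-2024-06.csv"
    purpose: str     # why it was ingested
    ingested_at: datetime
    flagged: bool = False

class IngestionLedger:
    """Hypothetical audit trail: what went in, who flagged it, what was erased."""
    def __init__(self):
        self._records: dict[str, IngestionRecord] = {}

    def record(self, record_id: str, source: str, purpose: str):
        self._records[record_id] = IngestionRecord(
            record_id, source, purpose, datetime.now(timezone.utc))

    def flag(self, record_id: str):
        self._records[record_id].flagged = True

    def erase(self, record_id: str):
        # In a real system this must also trigger removal from training data
        # and derived artefacts - the hard part, and the point of the ask.
        del self._records[record_id]

    def audit(self):
        return list(self._records.values())

ledger = IngestionLedger()
ledger.record("r1", "crm-export-2024-06.csv", "fine-tuning support assistant")
ledger.flag("r1")      # reviewer spots personal data
ledger.erase("r1")     # deletion request honoured
print(ledger.audit())  # -> []
```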
Another area in which a GDPR mindset can help is data anonymisation. Once anonymised, data can no longer identify specific people [data subjects], so it falls outside the GDPR’s scope. That’s not some legal loophole, but simple logic: if someone can’t reasonably use the data to identify or re-identify a specific individual, the risk to the person the data relates to is significantly reduced.
Think of it as another ‘layer’ among your safeguards – the last line of defence, in this case.
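Here’s a minimal sketch of that last layer, assuming a simple tabular pipeline: direct identifiers are dropped and quasi-identifiers are generalised before any record reaches an AI system. Genuine anonymisation needs formal techniques (k-anonymity, differential privacy) and expert review; this only shows the shape of the step.

```python
def anonymise(record: dict) -> dict:
    """Strip direct identifiers, generalise quasi-identifiers.

    NB: hashing or pseudonymising identifiers is NOT anonymisation under
    the GDPR - the data must no longer be reasonably re-identifiable.
    """
    DIRECT_IDENTIFIERS = {"name", "email", "phone", "customer_id"}
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

    # Generalise quasi-identifiers so combinations can't single someone out.
    if "age" in out:
        out["age_band"] = f"{(out.pop('age') // 10) * 10}s"   # 37 -> "30s"
    if "postcode" in out:
        out["postcode_area"] = out.pop("postcode").split()[0]  # "EC1A 1BB" -> "EC1A"
    return out

print(anonymise({
    "name": "Jane Doe", "email": "jane@example.com",
    "age": 37, "postcode": "EC1A 1BB", "plan": "enterprise",
}))
# -> {'plan': 'enterprise', 'age_band': '30s', 'postcode_area': 'EC1A'}
```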
Secure your organisation’s future
Navigate the complex landscape of AI with ease, ensuring your organisation remains compliant and your data stays private.
Our Artificial Intelligence Staff Awareness E-learning Course enables your team to use AI tools with confidence and integrity. Get insights into AI’s transformative impact across industries, and learn strategies to manage AI risks and promote its ethical use.
About Camden Woollven
As our head of AI product marketing and our AI subject-matter expert, Camden leads the development of our AI compliance products.
Her goal is to lead a cultural shift towards the adoption of AI technologies. She also aims to promote a mindset of continual learning and innovation as we develop AI-related competencies and capabilities.
Camden also regularly features in media coverage of AI.
We hope you enjoyed this edition of our ‘Expert Insight’ series. We’ll be back soon, chatting to another expert within GRC International Group.
If you’d like to get our latest interviews and resources straight to your inbox, subscribe to our free Security Spotlight newsletter.
Alternatively, explore our full index of interviews here.