Over a Third of Firms Struggling With Shadow AI
Over a third of organizations have admitted that they face major challenges monitoring the use of unsanctioned AI tools in the enterprise, according to Strategy Insights.
The London-headquartered consulting firm polled 3,320 directors from companies across the US, UK, Germany and the Nordic and Benelux regions to better understand how they’re managing AI.
It found that non-approved tools are particularly challenging to monitor when integrated with legacy systems.
Shadow AI could present significant cyber and compliance risks if users accidentally share sensitive corporate information with large language models (LLMs). Samsung was forced to ban the use of generative AI (GenAI) internally after staff shared source code and meeting notes with ChatGPT on separate occasions.
A RiverSafe study from April claimed that a fifth of UK firms have had potentially sensitive corporate data exposed via employee use of GenAI.
Read more on AI: Forrester: GenAI Will Lead to Breaches and Fines in 2024
Even cybersecurity professionals are using AI tools that have not been approved by their own departments. A Next DLP poll taken at the RSA Conference and Infosecurity Europe events found that nearly three-quarters (73%) of IT security pros had used unsanctioned apps, including AI, in the previous 12 months.
Several respondents told Strategy Insights that they rely on “honey tokens,” decoy markers planted in sensitive data that raise an alert if they later surface where they shouldn’t, to track data leakage within AI systems.
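As a rough illustration of the idea, the sketch below plants a unique canary string in a document and scans outbound text (for example, prompts passing through an LLM proxy) for it. This is a minimal, hypothetical example; the token format and function names are illustrative, not drawn from any respondent’s tooling.

```python
import re
import secrets

# Illustrative prefix for planted canary strings (an assumption, not a standard).
TOKEN_PREFIX = "HT"

def make_token() -> str:
    """Create a unique, unguessable honey token to plant in a sensitive document."""
    return f"{TOKEN_PREFIX}-{secrets.token_hex(8)}"

def scan_outbound(text: str, known_tokens: set[str]) -> set[str]:
    """Return any planted tokens found in outbound text, indicating a leak."""
    candidates = set(re.findall(rf"{TOKEN_PREFIX}-[0-9a-f]{{16}}", text))
    return candidates & known_tokens

# Usage: plant the token, then check text before it leaves the network.
token = make_token()
doc = f"Q3 roadmap (internal)\n{token}\n..."
prompt = f"Summarize this: {doc}"
assert scan_outbound(prompt, {token}) == {token}  # the leak would be flagged
```

In practice the scan would sit at an egress point such as a web proxy, so the alert fires before the sensitive document reaches an external AI service.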
Nearly half (48%) also agreed that employee training is needed to ensure staff handle AI responsibly and understand the associated risks, especially in highly regulated industries like healthcare and finance.
Two-thirds (67%) pointed to robust governance frameworks as key to mitigating shadow AI risks.
Strategy Insights’ Johan Oosthuizen said organizations need to build an effective monitoring framework that tracks how, where and why employees are using AI.
“A balanced approach, incorporating regular audits of employee devices and networks, can help organizations keep track of non-approved AI tools while respecting user privacy,” he added.
“Leaders at the roundtable recommended deploying network monitoring systems and establishing company-wide policies on acceptable AI tool usage to prevent unauthorized data sharing and potential security breaches.”
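To make that recommendation concrete, here is a minimal, hypothetical sketch of the kind of network monitoring the roundtable describes: flagging requests to known GenAI endpoints in proxy or DNS logs. The domain list and log format are illustrative assumptions, not details from the article.

```python
# Hypothetical allowlist-style check: hosts associated with popular GenAI tools.
GENAI_DOMAINS = {
    "api.openai.com",
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
}

def flag_shadow_ai(log_lines):
    """Yield (user, host) pairs where a request hit a known GenAI endpoint."""
    for line in log_lines:
        # Assumed log format: "<timestamp> <user> <host> <path>"
        parts = line.split()
        if len(parts) >= 3 and parts[2] in GENAI_DOMAINS:
            yield parts[1], parts[2]

logs = [
    "2024-06-01T10:02:11 alice api.openai.com /v1/chat/completions",
    "2024-06-01T10:03:45 bob intranet.example.com /wiki",
]
for user, host in flag_shadow_ai(logs):
    print(f"Unsanctioned AI endpoint: {user} -> {host}")
```

A real deployment would pair a check like this with the acceptable-use policy the leaders describe, so flagged activity triggers a conversation or an approval workflow rather than just a block.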