#InfosecurityEurope: Netskope Sets Out to Help Enterprises Safely Use ChatGPT
Cybersecurity experts are grappling with how to secure use of ChatGPT and other generative AI tools such as Google Bard and Jasper. Netskope’s new security solution enhancements launched at Infosecurity Europe aim to do just that.
Netskope, a Secure Access Service Edge (SASE) provider, has enhanced its Intelligent Security Service Edge (SSE) platform with a range of capabilities that enable employees to realize the benefits of tools like ChatGPT without incurring cybersecurity or data protection risks.
In a recent analysis, the company found that ChatGPT adoption is growing at a rate of 25% month over month, with approximately one in 100 enterprise employees actively using ChatGPT daily, each submitting eight prompts per day on average.
Risk Versus Opportunity
The growing use of generative AI has greatly increased data protection requirements for organizations, Neil Thacker, CISO EMEA at Netskope, told Infosecurity.
“On top of issues around data integrity that come from pre-trained models, there is an immediately pressing risk for data exfiltration when users share confidential information in any GenAI tool,” he explained. “Data included in requests to GenAI tools is placed in the hands of a third-party with potentially little to no contractual agreement for how it will be treated – and an inappropriate level of trust in the security posture of the tool’s data handling.”
Despite these issues, Thacker said that organizations are “incredibly keen to make use of the productivity benefits” of generative AI, with Netskope’s analysis finding that only around 10% of enterprises actively block ChatGPT use by their teams. However, “they need to ensure that they manage the risk and are not left retrospectively responding to potential data exposure.”
Netskope has developed a unified solution offering, designed for the secure and compliant use of generative AI. The Netskope Zero Trust Engine, which is part of Intelligent SSE, has several key features to enable this:
- Generative AI usage visibility: The tool allows instant access to specific ChatGPT usage and trends within the organization through a software as a service (SaaS) database and advanced analytics dashboard. Additionally, Netskope’s Cloud XD analytics discerns access levels and data flows through application accounts, such as corporate vs. personal accounts. There is also a new web category to identify generative AI domains, allowing teams to configure access control and real-time protection policies.
- Application access control: Organizations can apply granular access controls to ChatGPT and other generative AI applications via the solution. Additionally, it provides users with real-time training by displaying messages to alert them to potential data exposure and other risks every time generative AI applications are accessed.
- Data protection controls: The Zero Trust Engine is also designed to help prevent organizations from falling foul of data protection regulations, such as the General Data Protection Regulation (GDPR), California Consumer Privacy Act (CCPA) and Health Insurance Portability and Accountability Act (HIPAA). This includes monitoring posts and file uploads to AI chatbots and allowing or blocking them accordingly (a simplified illustration of this kind of check follows this list).
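To make the data protection control concrete, here is a minimal sketch of the general technique: a DLP-style check that inspects an outgoing prompt for sensitive patterns before it reaches a generative AI service. This is a hypothetical illustration, not Netskope’s implementation; the patterns, the `inspect_prompt` function and the block/allow actions are all assumptions made for the example.

```python
import re

# Illustrative detectors only; a production DLP engine uses far richer
# classifiers (exact-match dictionaries, ML detectors, file fingerprinting).
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{20,}"),
}

def inspect_prompt(prompt: str) -> dict:
    """Inspect a prompt bound for a GenAI tool and return a verdict."""
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]
    # Default-allow here; a stricter deployment might default to "coach"
    # (allow, but show the user a real-time warning) for all GenAI traffic.
    action = "block" if hits else "allow"
    return {"action": action, "matched": hits}

if __name__ == "__main__":
    print(inspect_prompt("Summarise card 4111 1111 1111 1111 for our report"))
    # -> {'action': 'block', 'matched': ['credit_card']}
```

In practice, a “block” verdict would stop the request at the gateway, while a “coach” action, akin to the real-time alerts Netskope describes, would let it through with a warning to the user.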
Combining Tools with People and Processes
Thacker emphasized that organizations must combine tools with education and policies to maintain the secure use of ChatGPT and other generative AI tools. The Netskope platform gives real-time alerts to coach users.
Organizations should first utilize technologies like Intelligent SSE to gain a full understanding of what data is being shared and where. Then, Thacker said, they need “to set policies to put appropriate guardrails around activity in line with the risk.”
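As a thought experiment, such risk-aligned guardrails can be expressed as simple policy rules. The sketch below is purely illustrative (the category names, account types and actions are assumptions, not Netskope syntax): it maps an application category and account type to an allow, coach or block decision.

```python
# Hypothetical guardrail policy: map an (app category, account type) pair
# to an action, where "coach" means allow the request but show the user
# a real-time warning before they proceed.
POLICY = {
    ("generative_ai", "corporate"): "coach",
    ("generative_ai", "personal"): "block",  # no company data via personal accounts
}

def evaluate(app_category: str, account_type: str) -> str:
    # Default-deny any combination the policy does not explicitly cover.
    return POLICY.get((app_category, account_type), "block")

print(evaluate("generative_ai", "corporate"))  # -> coach
```

Defaulting to “block” for anything the policy does not cover keeps the failure mode conservative, avoiding the retrospective response to data exposure that Thacker warns against.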
On June 14, 2023, the European Parliament voted to adopt its negotiating position on the ‘AI Act’, draft legislation designed to strictly regulate AI services and mitigate the risks they pose. The latest draft introduced new measures to control “foundation models.”
This comes in light of significant data privacy and ethical concerns around the use of data to develop generative AI tools like ChatGPT.
Image credit: Popel Arseniy / Shutterstock.com