PII Input Sparks Cybersecurity Alarm in 55% of DLP Events
A substantial 55% of recent Data Loss Prevention (DLP) events have involved attempts to input personally identifiable information (PII), while 40% included confidential documents.
The figures come from Menlo Security’s report The Continued Impact of Generative AI on Security Posture, published earlier today.
According to the new data, the generative AI landscape underwent significant transformation from July to December 2023, with the emergence of new platforms and features contributing to a diverse and specialized market.
“AI can already positively impact the cybersecurity field way beyond the simple automation of tasks,” commented Pathlock CEO, Piyush Pandey. “From intelligent response automation to behavioral analysis and prioritization of vulnerability remediation, AI is already adding value within the cybersecurity field.”
At the same time, the expansion of generative AI has ushered in new cybersecurity risks for enterprises. While organizations are increasingly aware of these risks and are ramping up efforts to mitigate data loss and leakage resulting from heightened generative AI usage, security policies primarily target individual applications rather than the broader spectrum of generative AI platforms.
Menlo’s latest report underscores this trend, revealing a 26% surge in security policies tailored for generative AI sites. However, most of these policies are still implemented at the application level, potentially leaving gaps in coverage. Notably, organizations adopting group-level security protocols demonstrate a stronger security posture: 92% have security-focused policies in place for generative AI usage, compared to only 79% of those with domain-level measures.
“Over-reliance on AI could lead to skill atrophy in certain basic areas,” warned Craig Jones, VP of security operations at Ontinue. “For example, if AI tools are always responsible for identifying and categorizing threats, newer cybersecurity professionals might not develop a strong foundational understanding of these processes. Therefore, it’s crucial that training and education in cybersecurity evolve alongside AI developments.”
Further insights from the report reveal that while most traffic gravitates toward the six main generative AI sites, file uploads across the generative AI category as a whole have surged by 70%. This disparity highlights the inadequacy of relying solely on application-specific security policies.
Read more on AI-powered attacks: ChatGPT Cybercrime Surge Revealed in 3000 Dark Web Posts
Finally, while copy-and-paste attempts at generative AI sites have declined slightly (by 6%), they remain prevalent, underscoring the need to implement controls.
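Controls of this kind typically scan text before it leaves the browser or endpoint. As a rough illustration only (this is not how Menlo Security's product works, and the patterns shown are deliberately simplistic), a naive pre-filter that flags common PII types in pasted text might look like:

```python
import re

# Illustrative patterns for a few common PII types. A real DLP engine
# would use far more robust detection (validation, context, ML models).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def find_pii(text: str) -> list[str]:
    """Return the names of PII categories detected in `text`."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(text)]


def should_block_paste(text: str) -> bool:
    """Block the paste attempt if any PII category is detected."""
    return bool(find_pii(text))
```

For example, `should_block_paste("SSN: 123-45-6789")` returns `True`, while ordinary prose passes through. The point of group-level (rather than per-application) policy is that a check like this applies to the whole generative AI category at once, not just to individually listed sites.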