For security leaders, AI is a mounting peril and an emerging shield
The already heavy burden borne by enterprise security leaders is being dramatically worsened by AI, machine learning, and generative AI (genAI). Fast-growing threats such as malware, phishing, and ransomware gain new potency and effectiveness from AI, which can sharpen phishing lures, create convincing fake identities, and impersonate real ones.
Easy access to online genAI platforms, such as ChatGPT, lets employees carelessly or inadvertently upload sensitive or confidential data. And in adversaries' hands, AI opens two attack vectors:
- It makes a range of existing attacks – such as social engineering, phishing, deepfakes, and malware – faster and much more effective.
- It enables exploitation of enterprise AI applications and models during and after development, such as deploying poisoning attacks at the model training stage, or hijacking the model by feeding it incorrect information.
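To make the poisoning risk concrete, here is a toy sketch (entirely our illustration, with made-up data and a deliberately simple nearest-centroid classifier): relabeling a few malicious training samples as benign shifts the learned decision boundary so that a clearly suspect sample slips through.

```python
# Toy illustration of training-data poisoning (hypothetical data, not from
# the article): an attacker who can tamper with training labels moves the
# "benign" centroid toward malicious traffic.

def centroid(points):
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

def train(data):
    """data: list of ((x, y), label) pairs -> one centroid per label."""
    by_label = {}
    for point, label in data:
        by_label.setdefault(label, []).append(point)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, point):
    """Assign `point` the label of the nearest centroid."""
    return min(model, key=lambda lab: (model[lab][0] - point[0]) ** 2
                                    + (model[lab][1] - point[1]) ** 2)

# Benign traffic clusters near (0, 0); malicious traffic near (10, 10).
clean = [((0, 0), "benign"), ((1, 0), "benign"), ((0, 1), "benign"),
         ((10, 10), "malicious"), ((11, 10), "malicious"), ((10, 11), "malicious")]

# Poisoning: the attacker relabels two malicious training samples as benign.
poisoned = [(pt, "benign") if pt in {(10, 10), (11, 10)} else (pt, lab)
            for pt, lab in clean]

suspect = (6, 6)
print(predict(train(clean), suspect))     # flagged as malicious
print(predict(train(poisoned), suspect))  # now misclassified as benign
```

Real poisoning attacks against production models are far subtler, but the mechanism is the same: corrupt the training data, and the model's boundary moves where the attacker wants it.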
Organizations are reacting to the rise of AI in one of two ways:
- Encouraging widespread use, with little oversight or understanding of the risks.
- Banning nearly all use (except for a small group of specialists), effectively crippling its potential.
Information security leaders need an approach that is comprehensive, flexible and realistic.
Leveraging AI to counter AI
Security leaders are already grappling with how to use AI to defend against such attacks, monitor employees' use of it, and protect the organization's applications, including AI apps and models. In practice, many are simply adopting still more individual security tools, now with AI incorporated.
But such an ad hoc approach carries trade-offs: integration is minimal or absent entirely; centralized management is nearly impossible; data sharing is difficult; and the workload on security staff is rising.
An alternative approach: AI-based security platforms
A better option is connected cybersecurity platforms, each with AI capabilities designed and trained to address a broad security function, such as code protection, employee usage, and security operations (SecOps). Where needed, these platforms can be augmented by specialized security tools targeting specific vulnerabilities.
Ideally, these AI-powered security platforms are designed to work in harmony with each other, enabling central management and shared data, and – through open APIs – with at least some existing security tools. This consolidation approach can be thought of as the “platformization” of enterprise cybersecurity in the AI era. It transcends the stale “best-of-breed versus one-size-fits-all” debate.
Enterprise security leaders can start by focusing on a few key priorities.
Defend against AI-driven attacks: Block sophisticated web-based threats, zero-day threats, evasive command-and-control attacks, and DNS hijacking attacks.
Secure employee AI usage: Classify and prioritize genAI apps to assess risk and detect anomalies; create and enforce very specific usage policies; and alert and coach employees on using AI safely.
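As a minimal sketch of what enforcing such a usage policy could look like in practice (the pattern names and regexes below are illustrative assumptions, not any vendor's actual detection rules), a pre-upload check can scan a prompt for obviously sensitive strings before it reaches an external genAI service:

```python
import re

# Hypothetical genAI usage-policy check: block prompts containing obvious
# sensitive patterns before they leave the organization. The categories and
# regexes are illustrative only; real DLP rules are far more sophisticated.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def check_prompt(prompt):
    """Return the policy categories the prompt violates (empty = allowed)."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

def enforce(prompt):
    violations = check_prompt(prompt)
    if violations:
        # Block the upload and coach the employee on what was detected.
        return f"BLOCKED: {', '.join(violations)}"
    return "ALLOWED"

print(enforce("Summarize our Q3 roadmap"))               # ALLOWED
print(enforce("Debug this: key=AKIAIOSFODNN7EXAMPLE"))   # BLOCKED: api_key
```

The "alert and coach" step matters as much as the block: telling employees what triggered the policy turns each blocked prompt into a teaching moment.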
Secure AI development: Monitor real-time AI traffic flows covering applications, models, user access and infrastructure threats; and enable anomaly detection to protect AI models from manipulation.
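A rough sketch of the anomaly-detection idea (the window size, threshold, and traffic numbers are all assumptions for illustration): flag any per-minute request count to a model endpoint that deviates sharply from its trailing baseline.

```python
import statistics

# Illustrative anomaly detector for AI traffic monitoring (our sketch, not a
# product feature): flag samples that sit far outside the trailing baseline.

def detect_anomalies(counts, window=10, threshold=3.0):
    """Return indices where a count exceeds the mean of the preceding
    `window` samples by more than `threshold` standard deviations."""
    flagged = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline) or 1e-9  # guard a flat baseline
        if (counts[i] - mean) / stdev > threshold:
            flagged.append(i)
    return flagged

# Per-minute requests to a model endpoint; the spike could indicate model
# extraction, scraping, or an attempted manipulation campaign.
traffic = [100, 102, 98, 101, 99, 100, 103, 97, 101, 100, 500, 99]
print(detect_anomalies(traffic))  # the spike at index 10 is flagged
```

Production systems would baseline many signals at once (users, prompts, token volumes, infrastructure calls), but the principle is the same: learn normal, then alert on deviation.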
The bottom line
AI gives your adversaries unprecedented power. Leveraging AI in connected cybersecurity platforms gives you the same power to defend against AI-driven attacks, secure employee usage of AI, and protect your AI software supply chain.
Leverage a platform approach to fight AI-based cyber threats with AI-based capabilities.