Reimagining Security for the AI Era

AI is one of the fastest-growing technologies in history, and it’s easy to see why. We all see its value in everyday life. It’s helping us write emails, summarize meetings, and even teach our kids math. And what we’re doing today is just a fraction of what we’ll be able to do a few short years from now.
I believe AI will truly be a net positive for society and the economy. But as inspiring and exciting as AI is, it also presents us with the hardest challenge in the history of cybersecurity. Ironically, while security has been blamed for slowing technology adoption in the past, we believe that taking the right approach to safety and security today will actually accelerate AI adoption.
This week at RSA in San Francisco, I’m laying out the case for what makes AI such a unique security and safety challenge. And at Cisco, we’ve launched a range of innovations designed to help enterprises equip their incredibly overworked and understaffed cybersecurity teams with the AI tools they need to protect their companies in this AI era.
What’s so hard about securing AI anyway?
It all starts with the AI models themselves. Unlike traditional apps, AI applications have models (sometimes more than one) built into their stack. These models are inherently unpredictable and non-deterministic. In other words, for the first time, we’re securing systems that think, talk, and act autonomously in ways we can’t fully predict. That’s a game-changer for cybersecurity.
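Much of that unpredictability comes from how these models generate output: each token is sampled from a probability distribution, so the same input can produce different responses on different runs. A deliberately minimal sketch of the idea (the toy vocabulary and probabilities below are illustrative, not from any real model):

```python
import random

# Toy next-token distribution a model might produce for one prompt.
# A real LLM computes these probabilities with billions of parameters;
# the sampling step below is the same basic mechanism.
vocab = ["allow", "deny", "escalate"]
probs = [0.5, 0.3, 0.2]

def sample_response(seed=None):
    rng = random.Random(seed)
    # Sampling: pick a token proportionally to its probability.
    return rng.choices(vocab, weights=probs, k=1)[0]

# Identical "prompts" can disagree across runs -- which is why testing
# and securing model behavior is fundamentally harder than testing
# deterministic application code.
outcomes = {sample_response(seed=i) for i in range(50)}
```

Because the output is a draw from a distribution rather than a fixed function of the input, traditional pass/fail testing and signature-based controls don't map cleanly onto model behavior.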
With AI, a security breach isn’t just about someone stealing private data or shutting down a system anymore. Now, it’s about the core intelligence driving your business being compromised. That means millions of ongoing decisions and actions could be manipulated in an instant. And as enterprises use AI across mission-critical parts of their organizations, the stakes are only going to get bigger.
How do we keep ourselves secure in the AI world?
At Cisco, we are focused on helping understaffed and overworked security operations and IT leaders tackle this new class of AI-related risks. Earlier this year, we launched AI Defense, the first solution of its kind. It gives security teams a common substrate across their enterprise, helping them see everywhere AI is being used; it continuously validates that AI models aren’t compromised; and it enforces safety and security guardrails along the way.
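This post doesn't describe AI Defense's internals, but the guardrail pattern itself can be illustrated with a simplified sketch: screen both the prompt going into a model and the response coming out against policy rules. The rule patterns and function names here are hypothetical; a production guardrail engine would rely on model-based classifiers rather than regexes:

```python
import re

# Hypothetical policy rules for illustration only.
BLOCKED_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),  # prompt injection
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                      # SSN-like data leak
]

def check_guardrails(text: str) -> bool:
    """Return True if the text passes every guardrail rule."""
    return not any(p.search(text) for p in BLOCKED_PATTERNS)

def guarded_call(prompt: str, model_fn) -> str:
    """Wrap a model call with input and output guardrails."""
    if not check_guardrails(prompt):
        return "[blocked: prompt violates policy]"
    response = model_fn(prompt)
    if not check_guardrails(response):
        return "[blocked: response violates policy]"
    return response
```

The key design point is that enforcement sits on both sides of the model: a compromised or manipulated model can emit policy-violating output even for a benign prompt, so checking only inputs is not enough.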
We also recently announced a partnership with NVIDIA to deliver Secure AI Factories that combine NVIDIA’s AI computing power with our networking technology to secure AI systems at every layer of the stack. And today we introduced a new partnership with ServiceNow. They are integrating AI Defense into their platform to centralize AI risk management and governance, making it easier for customers to gain visibility, reduce vulnerabilities, and track compliance. This ensures that organizations have a single source of truth for managing AI risks and compliance.
In other developments at RSA this week, we’re also delivering:
- New agentic AI capabilities within Cisco XDR: multi-model, multi-agent rapid threat detection and response.
- Enhancements to Splunk Enterprise Security: Splunk SOAR 6.4 is now generally available, and Splunk ES 8.1 will be GA in June.
- AI Supply Chain Risk Management: New capabilities for identifying and blocking malicious AI models before they enter the enterprise.
You can read more about all of these innovations here.
Finally, we also introduced Foundation AI, a new team of top AI and security experts focused on accelerating innovation for cybersecurity teams. This announcement includes the release of the industry’s first open-weight reasoning model built specifically for security. The security community needed an AI model breakthrough, and we are thrilled to open up this new area of innovation.
The Foundation AI Security model is an 8-billion parameter, open-weight LLM that’s designed from the ground up for cybersecurity. The model was pre-trained on carefully curated data sets that capture the language, logic, and real-world knowledge and workflows that security professionals work with every day. The model is:
- Built for security — trained on 5 billion tokens distilled from 900 billion;
- Easily customizable — 8B parameters, pre-trained from a Llama model, so anyone can download and train it; and
- Highly efficient — it’s a reasoning model that can run on 1-2 A100s versus the 32+ H100s comparable models require.
We are releasing this model and the associated tooling as open source as a first step toward building what we are calling Super Intelligent Security.
As we work with the community, we will develop fine-tuned versions of this model and create autonomous agents that work alongside humans on complex security tasks and analysis. The goal is to make security operate at machine scale and keep us well ahead of the bad actors.
You can read more about Foundation AI and its mission here.
Security is a team sport
We decided to open source the Foundation AI Security model because, in cybersecurity, the real enemy is the adversary trying to exploit our systems. I believe AI is the hardest security challenge in history. Without a doubt, that means we must work together as an industry to ensure that security for AI scales as fast as the AI that’s so quickly changing our world.
Jeetu