Foundation AI: Robust Intelligence for Cybersecurity

Today, we’re announcing a new organization at Cisco Security with a distinct mission. The team is called Foundation AI, and its mission is to create transformational AI technology for cybersecurity applications. The team has been hard at work for the past six months, since the acquisition of Robust Intelligence, on which it is based. In this post, we’ll describe the problem Foundation AI seeks to solve, outline its guiding principles, and share some of the products it is releasing.
The Problem: Cybersecurity Is Not Yet Utilizing Modern AI to Its True Potential
Since ChatGPT broke out in late 2022, AI has had a transformational impact across a variety of verticals and continues to develop at a breakneck pace. In consulting, healthcare, legal services, education, advertising, manufacturing, and media, AI is being used to automate knowledge work, accelerate discovery, personalize services, and generally redefine the way information and products are created and delivered.
In the cybersecurity industry, AI has not yet had the transformational impact one would expect. This is somewhat counterintuitive: cybersecurity products sit on troves of data, and SOC analysts are drowning in work and could use any automation they can get.
What Is Blocking the AI Transformation in Cybersecurity?
- AI Models Are Not Purpose-Built for Cybersecurity: Most AI models are designed for general tasks (like language generation or image recognition), not the highly specialized, adversarial demands of cybersecurity — making them poorly suited for threat detection and defense without significant adaptation.
- Adversarial Nature of Cybersecurity and Lack of High-Quality, Diverse Training Data: Cybersecurity is inherently adversarial, with attackers constantly evolving tactics, while effective AI depends on large, diverse, and well-labeled datasets — but real cybersecurity incidents are rare, sensitive, often undisclosed, and difficult to label accurately, crippling model performance.
- Integration Challenges into Existing Security Systems: Most enterprise security infrastructures are complex and legacy-based, making it difficult to integrate AI solutions cleanly without disrupting workflows, increasing operational risk, and requiring major organizational change.
The pace of innovation in the broader AI landscape is breathtaking. Billions of dollars are being poured into research and development. Yet, the application of truly cutting-edge AI within many established cybersecurity products lags behind products in peer verticals. While some companies have made progress, their AI efforts often remain rooted in classic machine learning models for traditional endpoint detection. This growing disparity poses a significant risk, as cybersecurity products that fail to embrace advanced AI risk becoming obsolete.
Introducing Foundation AI
Today, we are thrilled to announce the launch of Foundation AI, a Cisco organization dedicated to creating open bleeding-edge AI technology to empower cybersecurity applications. Foundation AI is comprised of leading AI and security researchers and engineers, building from Robust Intelligence, which was recently acquired by Cisco.
Open Innovation Is Crucial for Advancing Cybersecurity Applications
Modern security workflows involve chaining multiple LLM steps—planning, summarizing, recommending—and no single proprietary model is ideal for every task. Open-source models are critical because they allow teams to fine-tune for specific needs, swap in better models when necessary, and optimize for performance, latency, and reliability, all essential in high-pressure environments like threat detection.
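The chained workflow described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the `Stage` class, the stub backend, and the prompts are all assumptions, not Cisco's implementation); the point is that each stage takes a swappable model callable, so an open-source model fine-tuned for one step can replace a general-purpose one without touching the rest of the pipeline.

```python
# Hypothetical sketch: a multi-step LLM triage pipeline where each stage
# (plan, summarize, recommend) can be served by a different model backend.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Stage:
    name: str
    prompt: str                    # template with an {input} slot
    model: Callable[[str], str]    # swappable model backend


def run_pipeline(stages: List[Stage], alert: str) -> str:
    """Feed the alert through each stage, passing output to the next."""
    text = alert
    for stage in stages:
        text = stage.model(stage.prompt.format(input=text))
    return text


# Stub backend standing in for a real (possibly fine-tuned, open) model.
def stub_model(prompt: str) -> str:
    return f"[handled] {prompt.splitlines()[-1]}"


stages = [
    Stage("plan", "Plan investigation steps for:\n{input}", stub_model),
    Stage("summarize", "Summarize findings:\n{input}", stub_model),
    Stage("recommend", "Recommend remediation for:\n{input}", stub_model),
]

result = run_pipeline(stages, "Suspicious PowerShell spawned by winword.exe")
```

Because the backend is just a callable, optimizing one stage for latency or accuracy (as the text argues is essential in threat detection) is a one-line swap.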
Relying on closed, API-based models poses major challenges: high costs, lack of control, model deprecations, and barriers to customer deployment. Many cybersecurity organizations must run AI models directly in secure environments—no external SaaS allowed. Open-source models solve this by giving teams the ability to own, deploy, and secure their models.
Finally, open-source models are catching up with—and in some cases surpassing—closed models. As we later describe, our base model, for example, matches or outperforms models like Llama 3.1 70B on real-world cybersecurity benchmarks, all while being far more efficient to deploy. Our specialized cybersecurity reasoning model shows that small open-source models can beat general-purpose models three orders of magnitude larger. We argue that open source isn’t just an alternative—it’s becoming the best path forward for building powerful, secure, and future-proof cybersecurity AI.
Foundation AI is Releasing Models, Tools, and Data for Cybersecurity Applications
- Foundation base model for cybersecurity applications. Our first release is a foundation model purpose-built for security applications. The model is an 8B-parameter model, built on Llama and pre-trained on publicly available cybersecurity data. The model is available for download on Hugging Face, and is described in detail in a separate blog post focusing on the model itself, along with a technical report, model card, and other material to help adopt the model and apply it to SOC operations.
- The world’s first reasoning model built specifically for security applications. In addition to a base model, we will be releasing a model with reasoning capabilities designed to understand the complex relationships and context within security data, enabling more sophisticated analysis and decision-making. The model outperforms state-of-the-art models that are three orders of magnitude larger and will be made available later this summer.
- Novel benchmarks for evaluating cybersecurity models on real-world security use cases. Over the past six months of developing the technology, we found that the existing benchmarks do not necessarily capture the complexities of real-world security scenarios, such as understanding threat intelligence reports, analyzing malicious code, or triaging security alerts with high fidelity. We therefore decided to leverage the expertise of analysts within Cisco Security, Splunk, and other partners to create benchmarks to train and evaluate cybersecurity models. These benchmarks and data will be made available later in the summer as well.
- AI supply chain intelligence. In our journey at Robust Intelligence, we learned that one of the biggest problems CISOs face today is traditional security vulnerabilities in the AI supply chain. Model files, for example, that contain executable code or copyright-protected content present an enterprise with AI supply chain risk. Foundation AI will soon release AI supply chain and risk management (AI-SCRM) intelligence. We embedded this technology in Cisco’s Secure Endpoint and Email Threat Protection products, and as announced today, also in Secure Access.
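To make the "model files that contain executable code" risk concrete, here is a minimal sketch (not Cisco's scanner, just an illustration of the risk class) using only the Python standard library. Pickle-based model files can embed opcodes that import and call arbitrary objects at load time; a static scan can flag those opcodes without ever deserializing the file.

```python
# Minimal sketch: statically flag pickle opcodes that can execute code on
# load. Illustrates one class of AI supply-chain risk; real scanners
# (and the AI-SCRM intelligence described above) cover far more.
import io
import pickle
import pickletools

# Opcodes that cause pickle to import and/or call arbitrary objects.
DANGEROUS_OPS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}


def scan_pickle(data: bytes) -> list:
    """Return the names of risky opcodes found in a pickle stream."""
    found = []
    for opcode, arg, pos in pickletools.genops(io.BytesIO(data)):
        if opcode.name in DANGEROUS_OPS:
            found.append(opcode.name)
    return found


# A benign pickle: plain data, no code-executing opcodes.
benign = pickle.dumps({"weights": [0.1, 0.2]})


# A malicious pickle: __reduce__ makes loading it call os.system.
class Evil:
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned",))


malicious = pickle.dumps(Evil())  # scanned statically, never loaded

benign_hits = scan_pickle(benign)
malicious_hits = scan_pickle(malicious)
```

Note that the malicious payload is only ever scanned, never passed to `pickle.loads`, so nothing executes; that is exactly the property a supply-chain check needs.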
We’re extremely excited about this mission and all that is ahead. We look forward to unlocking a new era in cybersecurity: one of Robust Intelligence. And more great puns.
We’d love to hear what you think. Ask a question, comment below, and stay connected with Cisco Security on social!