Beyond the Hype: Understanding the True Value of AI/ML in Security Environments
By Matt Kraning, CTO, Cortex
Artificial intelligence (AI) and machine learning (ML) are terms that are heard everywhere across the IT security landscape today, as organizations and attackers are both seeking to leverage these advancements in service of their goals. For the bad actors, it’s about breaking down defenses and finding vulnerabilities faster. But what value can AI and ML offer when you’re working to secure an organization?
It would be great to say that these technologies are an end in themselves for your cybersecurity, and that merely adopting them means your organization is fully protected. But it's not that simple. Not all uses of AI and ML are created equal. And, spoiler alert, it's not all about using the latest algorithms.
However, to meet the speed and scale of today's threat landscape, AI and ML are vital parts of a holistic security solution. They should be focused on the ultimate outcome: preventing every type of attack you can, and responding as fast as possible to the ones you can't.
AI alone is not an answer
Artificial intelligence itself is not a differentiator for security. In fact, there are many different AI frameworks and models in common usage today. Generally speaking, those frameworks come from academia and are open-source, public implementations available to everyone. So, it’s not the AI framework that makes a difference. What differentiates is how the AI is used and what data is available for AI to learn from.
What makes AI better and smarter for cybersecurity?
Regardless of the purpose, AI that learns how to act via machine learning needs as much high-quality data as possible to be effective. It is through that abundance of good data that AI comes to understand possible scenarios. The more real-world data it acquires, the smarter it becomes and the more experience it can leverage.
So, think about this through the lens of cybersecurity. Learning from just one deployment or threat vector isn’t enough. What’s needed is a solution that learns from all deployments and a tool that leverages information from all its users—not just a single organization. The bigger the pool of environments and users, the smarter the AI. To that end, you also need a system that can handle both large volumes—and different kinds—of data.
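The effect of pooling data across deployments can be illustrated with a small simulation. This is a hypothetical sketch, not any vendor's method: it assumes each deployment's "normal" event rate is drawn from the same underlying distribution, and shows how an estimate built from many pooled deployments lands closer to the truth than one built from a single deployment.

```python
import random
import statistics

random.seed(0)

TRUE_RATE = 100.0  # the (unknown) true normal event rate we want to learn

def sample_deployment(n_samples):
    """Simulate noisy telemetry from one deployment (illustrative only)."""
    return [random.gauss(TRUE_RATE, 15.0) for _ in range(n_samples)]

# Baseline learned from a single deployment's telemetry...
single = sample_deployment(20)

# ...versus a baseline learned from telemetry pooled across 50 deployments.
pooled = [x for _ in range(50) for x in sample_deployment(20)]

err_single = abs(statistics.mean(single) - TRUE_RATE)
err_pooled = abs(statistics.mean(pooled) - TRUE_RATE)
```

With 50x the data, the pooled estimate's expected error shrinks by roughly a factor of seven (the square root of 50), which is the statistical intuition behind "the bigger the pool, the smarter the AI."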
AI is about more than doing math with a computer. While data is a critical component for AI to be effective, AI and ML themselves also need to be baked into operational processes. They should not be thought of as stand-alone technologies but rather as enabling technologies that bring value to security processes and operations.
The most successful AI techniques combine large-scale statistical pattern matching from ML with other techniques, such as encoded domain knowledge, to form a hybrid system. Statistical techniques derived solely from ML are generally unable to adapt to newly developed, previously unseen threats, which by definition have little to no baseline statistics associated with them. Domain expertise, in turn, can be used to create logic (often partly derived from large-scale data analysis) that effectively prevents and detects specific attacker tactics and techniques.
However, aggregating these insights purely through expert systems results in unbalanced, skewed error rates across deployments. What's needed is an AI system that combines statistical insights from ML with domain-driven insights from other parts of the system, one that can generalize to novel attacks while maintaining consistent, low error rates for all deployments.
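The hybrid approach described above can be sketched in a few lines. This is a minimal illustration, not a real product's detection logic: the event fields, thresholds, and the specific rule (a service account logging in from an unexpected host) are all assumptions made up for the example. The key point is that the final verdict draws on both a statistical anomaly score and an expert-written rule.

```python
import statistics

def statistical_score(history, value):
    """ML-style anomaly score: z-score of a new value against history."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # guard against zero spread
    return abs(value - mean) / stdev

def domain_rule(event):
    """Expert-written domain logic (illustrative): flag a service account
    seen on a host outside its expected set."""
    return (event["account_type"] == "service"
            and event["host"] not in event["allowed_hosts"])

def hybrid_verdict(history, event):
    """Alert if EITHER the statistical model or the domain rule fires.
    The rule catches known-bad patterns with no statistical baseline;
    the score catches deviations the rules never anticipated."""
    anomalous = statistical_score(history, event["bytes_sent"]) > 3.0
    return anomalous or domain_rule(event)
```

For example, a brand-new data-exfiltration spike trips the statistical score even though no rule describes it, while a service-account misuse with perfectly ordinary traffic volume still trips the domain rule.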
The value AI and ML truly provide for cybersecurity
At a fundamental level, using AI and ML well in your organization's security enables security operations center (SOC) teams to do far more, more effectively, with fewer people. It is a multiplying factor that strengthens an organization's capacity and lets analysts put their skills and experience toward the work that matters most.
A common use case for AI and ML in security is to help establish a baseline of normal operations and then alert a team to potential anomalies. AI and ML can also be used to improve operational effectiveness by identifying the more mundane tasks that people are doing all the time. The technology can create or suggest automation playbooks that will save time and resources.
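The playbook-suggestion idea above can be sketched very simply. This is a hypothetical example, not any product's API: it assumes an analyst action log recorded as (alert type, action taken) pairs, and surfaces the pairs frequent enough to be worth automating.

```python
from collections import Counter

def suggest_playbooks(action_log, min_count=2):
    """Return (alert_type, action) pairs seen at least `min_count` times,
    most frequent first: repetitive manual work is an automation candidate."""
    counts = Counter(action_log)
    return [pair for pair, n in counts.most_common() if n >= min_count]

# Illustrative analyst activity log (made-up alert types and actions).
action_log = [
    ("phishing_email", "quarantine_message"),
    ("phishing_email", "quarantine_message"),
    ("malware_alert", "isolate_host"),
    ("phishing_email", "quarantine_message"),
    ("malware_alert", "isolate_host"),
]

suggestions = suggest_playbooks(action_log)
```

Here the tool would suggest automating message quarantine for phishing alerts first, since that pairing dominates the log. Real systems would of course mine richer multi-step sequences, but the principle is the same.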
AI and ML also help inform and power automation, which is the key to scalability in environments where staff and resources are always constrained. Every SOC today needs to address more threats, and more sophisticated ones, with fewer people. At the end of the day, the goal of AI and ML is to deliver a good security outcome in a way that makes the most of scarce time and resources.
How AI and ML can improve security outcomes
With security operations, there is never just one problem to solve, but rather a series of problems that are often coupled. With AI and ML improving automation and removing manual processes across security operations, more risks can be prevented from becoming security incidents. And if more risks are prevented, the organization can respond more effectively, because it is responding to fewer actual incidents.
AI and ML give you the benefit of focus and the power to scale with the threat landscape by leveraging the same tools as the attackers, strengthening your organization’s overall security posture.
About Matt Kraning
Matt Kraning is the CTO of Cortex at Palo Alto Networks. He’s an expert in large-scale optimization, distributed sensing, and machine learning algorithms run on massively parallel systems. Prior to co-founding Expanse, Matt worked for DARPA, including a deployment to Afghanistan. Matt holds PhD and Master’s degrees in Electrical Engineering, and a Bachelor’s degree in Physics, all from Stanford University.