AI-assisted cybersecurity: 3 key components you can’t ignore
Over the last year, we saw the explosive use of OpenAI’s ChatGPT accompanied by layman’s fears of the Artificial General Intelligence (AGI) revolution and forecasted disruptions to markets. Without a doubt, AI will have a massive and transformative impact on much of what we do, but the time has come for a more sober and thoughtful look at how AI will change the world, and, specifically, cybersecurity. Before we do that, let’s take a moment to talk about chess.
In 2018, one of us had the opportunity to hear and briefly speak to Garry Kasparov, the former world chess champion (from 1985 to 2000). He talked about what it was like to play and lose to Deep Blue, IBM’s chess-playing supercomputer, for the first time. He said it was crushing, but he rallied and beat it. He would go on to win more than lose.
That changed over time: he would then lose more than win, and eventually, Deep Blue would win consistently. However, he made a critical point: “For a period of about ten years, the world of chess was dominated by computer-assisted humans.” Eventually, AI alone dominated, and it’s worth noting that today the stratagems used by AI in many games baffle even the greatest masters.
The critical point is that AI-assisted humans have an edge. AI is really a toolkit made up largely of machine learning and LLMs, many of which have been applied for over a decade to tractable problems like novel malware detection and fraud detection. But there’s more to it than that. We are in an age where breakthroughs in LLMs dwarf what has come before. Even if we see a market bubble burst, the AI genie is out of the bottle, and cybersecurity will never be the same.
Before we continue, let’s make one last stipulation (borrowed from Daniel Miessler) that AI so far has understanding, but it does not show reasoning, initiative, or sentience. And this is critical for allaying the fears and hyperbole of machine takeover, and for knowing that we are not yet in an age where the silicon minds duke it out without carbon brains in the loop.
Let’s dig into three aspects at the interface of cybersecurity and AI: the security of AI, AI in defense, and AI in offense.
Security of AI
For the most part, companies face a dilemma much like the one posed by the advent of instant messaging, search engines, and cloud computing: they have to adopt and adapt or face competitors with a disruptive technological advantage. That means they can't simply block AI outright if they want to remain relevant. As with those other technologies, the first move is to create private instances, of LLMs in particular, as the public AIs scramble, like the public cloud providers of old, to adapt and meet market needs.
Borrowing the language of the cloud revolution for the era of AI, those looking to private, hybrid, or public AI need to think carefully about a number of issues, not least of which are privacy, intellectual property, and governance.
However, there are also issues of social justice: data sets can suffer from biases on ingestion, models can inherit biases (or hold a mirror up to us, showing truths in ourselves that we should address), and output can lead to unforeseen consequences. With this in mind, the following are critical to consider:
- Ethical use review board: the use of AIs must be governed and monitored for correct and ethical usage, much as other industries govern research and use, as healthcare does with cancer research.
- Controls on data sourcing: there are copyright issues, of course, but also privacy considerations on ingestion. Even if internal users can re-identify data, anonymization is important, as is looking for poisoning attacks and sabotage (see the sketch after this list).
- Controls on access: access should be for specific uses in research and by uniquely named and monitored people and systems for post facto accountability. This includes data grooming, tuning, and maintenance.
- Specific, not general, output: output should be for a specific, business-related purpose and application, with no general interrogation or open API access allowed unless the agents using that API are similarly controlled and managed.
- Security of AI role: consider a dedicated AI security and privacy manager. This person focuses on attacks such as model inversion (recovering the features and inputs used to train a model), evasion (iterative querying to get a desired outcome), and functional extraction; monitors for unreliable output (hallucination, lying, imagination, and the like); and watches for long-term privacy and manipulation risks. They also review contracts, tie into legal, work with supply chain security experts, interface with the teams that work with the AI toolkits, ensure factual claims in marketing (we can dream!), and so on.
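To make the data-sourcing controls concrete, here is a minimal Python sketch of a pre-ingestion check. The record format, regex patterns, and skew threshold are illustrative assumptions, not a prescribed pipeline; a production system would rely on a dedicated PII/DLP tool and proper statistical drift or poisoning detection.

```python
import re
from collections import Counter

# Illustrative patterns only; real pipelines would use a dedicated PII/DLP library.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def anonymize(text: str) -> str:
    """Redact obvious identifiers before a record enters the training corpus."""
    text = EMAIL.sub("[EMAIL]", text)
    return SSN.sub("[SSN]", text)

def label_skew(records: list[dict], threshold: float = 0.9) -> bool:
    """Crude poisoning/sabotage signal: one label suddenly dominating a batch."""
    counts = Counter(r["label"] for r in records)
    most_common = counts.most_common(1)[0][1]
    return most_common / len(records) > threshold

batch = [{"text": "Contact alice@example.com", "label": "benign"},
         {"text": "Reset via 123-45-6789", "label": "benign"}]
cleaned = [{**r, "text": anonymize(r["text"])} for r in batch]
if label_skew(cleaned):
    print("Batch flagged for manual review before ingestion")
```

The point is not the specific heuristics but that sourcing controls can be enforced in code, with every flagged batch routed to the accountable humans described above.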
AI in defense
There are also, however, applications of AI in the practice of cybersecurity itself. This is where the AI-assisted human paradigm becomes an important consideration in how we envision future security services. The applications are many, of course, but everywhere there is a rote task in cybersecurity, from querying and scripting to integration and repetitive analytics, there is an opportunity for the discrete application of AI. When a carbon-brained human has to perform a detailed task at scale, human error creeps in, and that carbon unit becomes less effective.
Human minds excel at tasks related to creativity, inspiration, and the things a silicon brain isn't good at: reasoning, sentience, and initiative. The greatest potential for silicon, for AI applied in cyber defense, is in process efficiencies, data set extrapolations, rote task elimination, and so on, so long as the dangers of leaky abstraction are avoided, where the user doesn't understand what the machine is doing for them.
For example, the opportunity for guided incident response is developing right now: it can help project an attacker's next steps, help security analysts learn faster, and increase the efficiency of the human-machine interface with a co-pilot (not an autopilot) approach. Yet we need to make sure that those who have that incident response flight assistance understand what is put in front of them, can disagree with the suggestions, make corrections, and apply their uniquely human creativity and inspiration.
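To make the co-pilot (not autopilot) idea concrete, here is a minimal Python sketch of a human-in-the-loop response loop. The suggest_next_steps() function is a placeholder for whatever LLM backend is actually in use, and the incident fields are hypothetical; the key point is that nothing is executed without the analyst's accept/edit/reject decision.

```python
# Minimal co-pilot loop: the model proposes, the analyst disposes.
# suggest_next_steps() is a stand-in for an LLM call, not a real API.

def suggest_next_steps(incident: dict) -> list[str]:
    """Placeholder for an LLM call that drafts response steps from incident context."""
    return [f"Isolate host {incident['host']}", "Pull EDR timeline for the last 24h"]

def copilot_response(incident: dict) -> list[str]:
    approved = []
    for step in suggest_next_steps(incident):
        answer = input(f"Proposed: {step} -- [a]ccept / [e]dit / [r]eject? ").strip().lower()
        if answer == "a":
            approved.append(step)
        elif answer == "e":
            approved.append(input("Your revised step: "))
        # Rejected suggestions are simply dropped; the analyst stays in control.
    return approved

if __name__ == "__main__":
    plan = copilot_response({"host": "srv-042", "severity": "high"})
    print("Analyst-approved plan:", plan)
```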
If this is starting to feel a little like our previous article on automation, it should! Many of the issues highlighted there, such as creating predictability that attackers can exploit when we automate, can now be accounted for and addressed with applications of AI technology. In other words, the use of AI can make the automation mindset more feasible and effective. For that matter, the use of AI can make a zero trust platform for parsing the IT outback's "never never" much more effective and useful. To be clear, these gains are not free or simply given by deploying LLMs and the rest of the AI toolkit, but they become tractable, manageable projects.
AI in offense
Security itself needs to be transformed because the adversaries are using AI tools to supercharge their own transformation. In much the same way that businesses can't ignore AI lest they be disrupted by competitors, Moloch drives us in cybersecurity because the adversary is also using it. This means that people in security architecture groups have to join the corporate AI review boards mentioned earlier, and potentially lead the way, in considering the adoption of AI:
- Red teams need to use the tools the adversary does
- Blue teams need to use them in incidents
- GRC needs to use them to gain efficiencies in natural language-to-policy interpretation (see the sketch after this list)
- Data protection must use them to understand the true flow of data
- Identity and access must use them to drive zero trust and to get progressively more unique and specific entitlements closer to real time
- Deception technologies need them to gain negative trust in our infrastructure to foil the opponent
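On the GRC point above, the following is a minimal sketch of natural language-to-policy interpretation, assuming a governed, private LLM endpoint. The llm_complete() function is a stub standing in for that endpoint, and the field names are illustrative; the idea is that model output is validated as structured policy and still requires GRC sign-off before anything is enforced.

```python
import json

def llm_complete(prompt: str) -> str:
    """Stand-in for the organization's governed, private LLM endpoint."""
    return json.dumps({"subject": "contractors", "action": "deny",
                       "resource": "source-code repos", "condition": "off-network"})

PROMPT = (
    "Translate this policy statement into JSON with keys "
    "'subject', 'action', 'resource', and 'condition':\n\n{statement}"
)

def statement_to_rule(statement: str) -> dict:
    raw = llm_complete(PROMPT.format(statement=statement))
    rule = json.loads(raw)  # reject anything that is not well-formed JSON
    missing = {"subject", "action", "resource", "condition"} - rule.keys()
    if missing:
        raise ValueError(f"Model output missing fields: {missing}")
    return rule  # still requires GRC sign-off before enforcement

rule = statement_to_rule("Contractors may not access source-code repos from off-network devices.")
print(rule)
```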
In conclusion, we are entering an era not of AI dominance over humans but of potential AI-assisted human triumph. We can't keep the AI toolkits out, because competitors and adversaries are going to use them, which means the real issue is how to put the right guidelines in place and how to flourish. In the short term, the adversaries in particular are going to get better at phishing and malware generation. We know that. However, in the long term, the applications in defense, for the defenders of those who build amazing things in the digital world, and the ability to triumph in cyber conflict far outstrip the capabilities of the barbarians and vandals at the gate.
To see how Zscaler is helping its customers reduce business risk, improve user productivity, and reduce cost and complexity, visit https://www.zscaler.com/platform/zero-trust-exchange.