Liars in the wires: Getting the most from GenAI without getting duped
Before, artificial intelligence (AI) and machine learning (ML) required programming languages. Now, simple text interfaces let everyone interact with powerful models that seem limitless. A University of California, San Diego study found that GPT-4 passed the Turing Test, with 54% of participants mistaking GPT-4's responses for a human's. Many of the latest AI-enabled tools can make you feel like you've mastered new subjects far and wide, unlocking vast riches and capabilities at first glance. That is, until you submit those results to true experts in those fields and end up sanctioned, like the legal counsel in New York who filed fabricated case citations. AI was going to change the world, until it didn't.
A few industries have remained skeptical of grand AI claims, cybersecurity among them. AI has proven to be a hard sell in the computer security space, perhaps due to the lingering trauma of the early 2010s, when ML and user and entity behavior analytics (UEBA) were going to automate detection of every threat with zero false positives.
So why do AI and ML struggle in cybersecurity?
There are notable challenges with AI/ML in cybersecurity: the field deals with extraordinarily rare events, failure carries a very high penalty, and the findings need to be explainable.
Even though it seems like every day brings news of another intrusion or ransomware attack, these events are rare in comparison to the quadrillions of normal events generated each day. This poses a challenge for AI and ML, which gravitate towards the most common nearby explanation, while cybersecurity events are, by definition, among the least likely explanations.
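To see why the base rate matters so much, consider a back-of-the-envelope Bayes calculation. This is a minimal sketch; the event volumes and detector accuracy figures below are illustrative assumptions, not measurements:

```python
# Illustrative base-rate arithmetic: even a highly accurate detector
# drowns in false positives when true attacks are vanishingly rare.
# All numbers below are assumptions chosen for illustration only.

events_per_day = 1_000_000_000   # benign events a large enterprise might log daily
attacks_per_day = 10             # genuinely malicious events hiding among them

true_positive_rate = 0.99        # detector catches 99% of real attacks
false_positive_rate = 0.001      # and misfires on just 0.1% of benign events

true_alerts = attacks_per_day * true_positive_rate
false_alerts = events_per_day * false_positive_rate

precision = true_alerts / (true_alerts + false_alerts)
print(f"Alerts per day: {true_alerts + false_alerts:,.0f}")
print(f"Chance a given alert is a real attack: {precision:.6%}")
# Roughly 1 real attack per 100,000 alerts, despite a detector
# that looks "99% accurate" on paper.
```

A detector that sounds excellent in isolation still buries the team in alerts that are almost all wrong, which is exactly the failure mode the next point describes.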
There is a high penalty for errors in cybersecurity, while the areas where AI and ML have been most successful carry a low penalty for false positives. In cybersecurity, the most common failure is a mistaken explanation adding more false-positive alerts to the team's load, eroding trust in the system they rely on for help. At its worst, a mistaken result by AI suppresses an alert that would normally be raised, and an intrusion that could have been stopped goes unnoticed.
Finally, cybersecurity requires explainable findings, but our GenAI copilots can't testify. GenAI is a habitual liar, sometimes convincingly so, often providing false references and explanations. In cybersecurity, analysts need accurate references to better understand the stimuli they are evaluating. In other cases, accurate references and explainable results are necessary for court cases, insurance settlements, and liability claims. This will likely improve over time as GenAI systems become embedded as research assistants, but for today, we can't trust these liars in the wires.
The rest of the world is talking about the benefits of GenAI, so what can it do for cybersecurity?
Most cybersecurity jobs are exercises in context switching from one urgent fire to the next, and those context switches are productivity killers. GenAI has proven most useful for drafting code, with vendors such as AWS claiming code assistants can help developers complete tasks 28% faster than without. I wouldn't be surprised if that gain is even larger in cybersecurity, where context switches are more common. In our field, a code assistant can help pull analysts into the task of writing data ingestion parsers, detection rules, or automated response and enrichment scripts, as in the sketch below.
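As a concrete example, here is the kind of small data-ingestion parser a code assistant can draft in seconds, sparing the analyst the boilerplate. This is a minimal sketch; the firewall log format, field names, and `FirewallEvent` type are hypothetical, not any particular vendor's schema:

```python
import re
from dataclasses import dataclass

# Hypothetical firewall log line this parser targets, e.g.:
#   2024-11-02T14:31:07Z DENY src=203.0.113.7:51423 dst=10.0.0.5:443 proto=tcp
LOG_PATTERN = re.compile(
    r"(?P<timestamp>\S+)\s+(?P<action>ALLOW|DENY)\s+"
    r"src=(?P<src_ip>[\d.]+):(?P<src_port>\d+)\s+"
    r"dst=(?P<dst_ip>[\d.]+):(?P<dst_port>\d+)\s+"
    r"proto=(?P<proto>\w+)"
)

@dataclass
class FirewallEvent:
    timestamp: str
    action: str
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int
    proto: str

def parse_line(line: str) -> FirewallEvent | None:
    """Parse one log line; return None for lines that don't match."""
    match = LOG_PATTERN.match(line.strip())
    if match is None:
        return None
    fields = match.groupdict()
    return FirewallEvent(
        timestamp=fields["timestamp"],
        action=fields["action"],
        src_ip=fields["src_ip"],
        src_port=int(fields["src_port"]),
        dst_ip=fields["dst_ip"],
        dst_port=int(fields["dst_port"]),
        proto=fields["proto"],
    )

if __name__ == "__main__":
    sample = "2024-11-02T14:31:07Z DENY src=203.0.113.7:51423 dst=10.0.0.5:443 proto=tcp"
    print(parse_line(sample))
```

None of this is hard, but it's exactly the fiddly regex-and-plumbing work that stalls an analyst mid-context-switch, and exactly where an assistant's draft gets you moving again.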
Every cybersecurity group I've worked with has struggled with post-incident write-ups, often because team members are paralyzed by writer's block. Generating the start of a technical document is a great way to break through that wall, with the added bonus that handing security analysts a sort-of-but-not-quite-correct report is a surefire way to nerd-snipe their undivided attention! This is an area where GenAI shines.
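For instance, an analyst might prompt a model for a report skeleton rather than a finished narrative. This is a minimal sketch using the OpenAI Python client; the model name, prompts, and incident summary are assumptions, and the output is a draft to edit, never a final report:

```python
from openai import OpenAI  # pip install openai; assumes OPENAI_API_KEY is set

client = OpenAI()

# Assumed incident facts the analyst already has. Never paste sensitive
# details into a third-party service without clearance to do so.
incident_summary = (
    "Phishing email led to credential theft; attacker logged into the VPN, "
    "moved laterally to a file server, and staged data in an archive."
)

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical choice; use whatever model your org approves
    messages=[
        {"role": "system",
         "content": "You draft post-incident report outlines for security teams. "
                    "Flag every unverified statement with [VERIFY]."},
        {"role": "user",
         "content": f"Draft a post-incident report outline for: {incident_summary}"},
    ],
)

# A starting point for the analyst to correct and complete.
print(response.choices[0].message.content)
```

Asking the model to flag its own unverified claims plays to the nerd-snipe effect: the analyst's job becomes correcting a draft, which is far easier to start than a blank page.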
GenAI may be appropriately named: much like sales lead-gen, it gives you a starting point, and skilled practitioners must take that start and mold it into a finished, accurate product. My words, not AI's.