Are GPT-Based Models the Right Fit for AI-Powered Cybersecurity?
A growing number of cybersecurity vendors are integrating tools based on large language models (LLMs) into their offerings. Many are opting for OpenAI’s GPT models.
Microsoft launched its GPT-4-powered Security Copilot in March, and in April Recorded Future added a new research feature using an OpenAI model trained on 40,000 threat intelligence data points.
Software supply chain security provider OX Security followed in May, while Security Service Edge (SSE) platform provider Netskope and email security developer Ironscales announced GPT-powered functionality during Infosecurity Europe in June.
Many other vendors are looking to leverage LLMs as well. During Infosecurity Europe, Mayur Upadhyaya, CEO of API security provider Contxt, told Infosecurity that his company had “secured an innovation grant in 2021, before the emergence of foundational models, to build a machine learning model for personal data detection, with a proprietary dataset. We are now trying to see how we can leverage foundational models with this dataset.”
Non-Deterministic AI Algorithms
LLMs are not the first type of AI to be integrated into cybersecurity products: many Infosecurity Europe exhibitors – the likes of BlackBerry Cyber Security’s Cylance AI, Darktrace, Ironscales and Egress – already leverage AI in their offerings.
However, while it is difficult to say exactly which AI algorithms cybersecurity vendors have used, those algorithms are very likely deterministic.
Jack Chapman, VP of threat intelligence at Egress, told Infosecurity that his company was using “genetic programming, behavioral analytics-based algorithms, as well as social graphs.”
Ronnen Brunner, SVP of International Sales at Ironscales, said during his presentation at Infosecurity Europe that his firm was using “a broad range of algorithms, including some leveraging natural language processing (NLP), but not LLMs yet.”
According to Nicolas Ruff, a senior software engineer at Google, most AI algorithms used in cybersecurity are classifiers, a type of machine learning algorithm used to assign a class label to a data input.
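To illustrate the idea, the sketch below shows a toy classifier of the kind Ruff describes – the features and data are hypothetical, not any vendor’s actual model – which, once trained, always returns the same label for the same input:

```python
# Toy classifier sketch (hypothetical features and data, not any
# vendor's product): label URLs as phishing (1) or benign (0).
from sklearn.ensemble import RandomForestClassifier

# Features per URL: [length, number of subdomains, uses raw IP address]
X_train = [
    [20, 1, 0],
    [95, 4, 1],
    [30, 1, 0],
    [110, 5, 1],
]
y_train = [0, 1, 0, 1]

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

# Deterministic at inference time: the same input always gets the same label.
print(clf.predict([[100, 4, 1]]))  # -> [1] (phishing)
```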
These and the other machine learning models mentioned above differ from LLMs and other generative AI models in that they work in a closed loop and have built-in restrictions.
LLMs, by contrast, are built on massive training sets and are designed to guess the most probable words following a given prompt. Together, these two features make them probabilistic rather than deterministic – meaning they provide the most probable answer, not necessarily the right one.
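The following toy sketch (made-up vocabulary and probabilities, not a real model) shows why sampling from a probability distribution over next tokens makes the output non-deterministic:

```python
# Toy illustration of probabilistic next-token selection (invented
# vocabulary and probabilities, not a real LLM): repeated runs on the
# same prompt can yield different continuations.
import random

next_token_probs = {
    "malicious": 0.5,
    "benign": 0.3,
    "suspicious": 0.2,
}

def sample_next_token(probs):
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "This attachment is"
for _ in range(3):
    # Each call samples from the distribution, so the "answer" varies.
    print(prompt, sample_next_token(next_token_probs))
```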
Just Another Tool in the Toolbox
Current general-purpose LLMs tend to hallucinate: they can give a convincing response that is entirely wrong.
Speaking to Infosecurity during Infosecurity Europe, Jon France, CISO of the non-profit (ISC)2, acknowledged that this makes current LLMs a risky tool for cybersecurity practices, where accuracy and precision are critical.
“LLMs can still be useful for various security purposes, like crafting security policies for everyone to understand,” he added.
Ganesh Chellappa, the head of support services at ManageEngine, agreed: “Anyone who has been using any user and entity behavior analytics (UEBA) solutions for many years has a huge amount of data that is just sitting there that they were never able to use. Now that LLMs are here, it’s not even a question; we must try and leverage them to make use of this data.”
Meanwhile, Chapman argued: “They can also be helpful for cybersecurity practitioners as a data pre-processing tool in areas such as anomaly detection (email security, endpoint protection…) or threat intelligence.”
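As a hedged illustration of that pre-processing role – using OpenAI’s Python client, with a prompt and output schema that are assumptions rather than Chapman’s actual method – an LLM could normalize raw logs before they reach an anomaly detector:

```python
# Hedged sketch: using an LLM to turn raw log lines into structured
# records before anomaly detection. Prompt wording and the output
# schema are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

raw_log = "Nov 12 03:14:07 host sshd[921]: Failed password for root from 203.0.113.7"

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "Extract {timestamp, service, event, source_ip} as JSON."},
        {"role": "user", "content": raw_log},
    ],
)
print(response.choices[0].message.content)  # structured record for the detector
```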
At this stage of development, France and Chapman insisted that the key thing to remember when using LLMs in cybersecurity is “to consider them as another tool in the toolbox – and one that should never be responsible for executive tasks.”
Open Source LLMs
According to Chellappa, the hallucination concerns will largely be solved when cybersecurity firms develop their own models from open source frameworks like Meta’s LLaMA or Stanford University’s Alpaca and train them on their own datasets.
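In practice, that approach might look like the minimal sketch below, assuming the Hugging Face transformers library; the model name and training corpus are placeholders, not a description of any firm’s actual pipeline:

```python
# Hedged sketch: fine-tuning an open-source causal LM on a firm's own
# security corpus. The model name and texts are placeholder assumptions.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "openlm-research/open_llama_3b"  # hypothetical open model choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA-style tokenizers lack a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Stand-in for the firm's proprietary dataset (incident reports, alerts...)
security_texts = [
    "Failed logins from 203.0.113.7 preceded a privilege escalation.",
    "Phishing campaign reused lookalike domains registered last week.",
]
train_dataset = [tokenizer(t, truncation=True, max_length=512) for t in security_texts]

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-security-lm", num_train_epochs=1),
    train_dataset=train_dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```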
However, SoSafe’s CEO, Dr. Niklas Hellemann, warned that the open source models won’t solve another growing issue LLM-based tools face: model poisoning.
Model poisoning refers to attack techniques in which an adversary injects bad data into a model’s training pool, getting it to learn something it shouldn’t.
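A toy demonstration of the principle – poisoning a simple classifier’s training pool rather than a real LLM’s – is sketched below:

```python
# Toy data-poisoning demonstration (illustrative only): flipping labels
# in the training pool degrades the trained model's behavior.
from sklearn.linear_model import LogisticRegression

# Clean training pool: feature = [number of suspicious keywords]
X = [[0], [1], [8], [9], [0], [10]]
y = [0, 0, 1, 1, 0, 1]          # 0 = clean email, 1 = phishing

clean_model = LogisticRegression().fit(X, y)

# Adversary injects poisoned samples: phishing-like inputs labeled clean.
X_poisoned = X + [[9], [10], [8], [9]]
y_poisoned = y + [0, 0, 0, 0]

poisoned_model = LogisticRegression().fit(X_poisoned, y_poisoned)

sample = [[9]]  # clearly phishing-like input
print(clean_model.predict(sample))     # -> [1]
print(poisoned_model.predict(sample))  # likely [0]: the poison worked
```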
“Open source models like LLaMA are already targeted with these attacks,” Hellemann told Infosecurity.