- Black Friday deal: Save up to $1,100 on this Sony Bravia 7 and Bar 8 bundle at Amazon
- Grab the 55-inch Samsung Odyssey Ark for $1,200 off at Best Buy ahead of Black Friday
- Page Not Found | McAfee Blog
- What Are the Latest Docker Desktop Enterprise-Grade Performance Optimizations | Docker
- This $550 OnePlus flagship is the best Black Friday phone deal I've seen so far
Splunk Urges Australian Organisations to Secure LLMs
Splunk’s SURGe team has assured Australian organisations that securing AI large language models against common threats, such as prompt injection attacks, can be accomplished using existing security tooling. However, security vulnerabilities may arise if organisations fail to address foundational security practices.
Shannon Davis, a Melbourne-based principal security strategist at Splunk SURGe, told TechRepublic that Australia was showing increasing security awareness regarding LLMs in recent months. He described last year as the “Wild West,” where many rushed to experiment with LLMs without prioritising security.
Splunk’s own investigations into such vulnerabilities used the Open Worldwide Application Security Project’s “Top 10 for Large Language Models” as a framework. The research team found that organisations can mitigate many security risks by leveraging existing cybersecurity practices and tools.
The top security risks facing Large Language Models
In the OWASP report, the research team outlined three vulnerabilities as critical to address in 2024.
Prompt injection attacks
OWASP defines prompt injection as a vulnerability that occurs when an attacker manipulates an LLM through crafted inputs.
There have already been documented cases worldwide where crafted prompts caused LLMs to produce erroneous outputs. In one instance, an LLM was convinced to sell a car to someone for just U.S. $1, while an Air Canada chatbot incorrectly quoted the company’s bereavement policy.
Davis said hackers or others “getting the LLM tools to do things they’re not supposed to do” are a key risk for the market.
“The big players are putting lots of guardrails around their tools, but there’s still lots of ways to get them to do things that those guardrails are trying to prevent,” he added.
SEE: How to protect against the OWASP ten and beyond
Private information leakage
Employees could input data into tools that may be privately owned, often offshore, leading to intellectual property and private information leakage.
Regional tech company Samsung experienced one of the most high-profile cases of private information leakage when engineers were discovered pasting sensitive data into ChatGPT. However, there is also the risk that sensitive and private data could be included in training data sets and potentially leaked.
“PII data either being included in training data sets and then being leaked, or potentially even people submitting PII data or company confidential data to these various tools without understanding the repercussions of doing so, is another big area of concern,” Davis emphasised.
Over-reliance on LLMs
Over-reliance occurs when a person or organisation relies on information from an LLM, even though its outputs can be erroneous, inappropriate, or unsafe.
A case of over-reliance on LLMs recently occurred in Australia, when a child protection worker used ChatGPT to help produce a report submitted to a court in Victoria. While the addition of sensitive information was problematic, the AI generated report also downplayed the risks facing a child involved in the case.
Davis explained that over-reliance was a third key risk that organisations needed to keep in mind.
“This is a user education piece, and making sure people understand that you shouldn’t implicitly trust these tools,” he said.
Additional LLM security risks to watch for
Other risks in the OWASP top 10 may not require immediate attention. However, Davis said that organisations should be aware of these potential risks — particularly in areas such as excessive agency risk, model theft, and training data poisoning.
Excessive agency
Excessive agency refers to damaging actions performed in response to unexpected or ambiguous outputs from an LLM, regardless of what is causing the LLM to malfunction. This could potentially be a result of external actors accessing LLM tools and interacting with model outputs via API.
“I think people are being conservative, but I still worry that, with the power these tools potentially have, we may see something … that wakes everybody else up to what potentially could happen,” Davis said.
LLM model theft
Davis said research suggests a model could be stolen through inference: by sending high numbers of prompts into the model, getting various responses out, and subsequently understanding the components of the model.
“Model theft is something I could potentially see happening in the future due to the sheer cost of model training,” Davis said. “There have been a number of papers released around model theft, but this is a threat that would take a lot of time to actually prove it out.”
SEE: Australian IT spending to surge in 2025 in cybersecurity and AI
Training data poisoning
Enterprises are now more aware that the data they use for AI models determines the quality of the model. Further, they are also more aware that intentional data poisoning could impact outputs. Davis said certain files within models called pickle funnels, if poisoned, would cause inadvertent results for users of the model.
“I think people just need to be wary of the data they’re using,” he warned. “So if they find a data source, a data set to train their model on, they need to know that the data is good and clean and doesn’t contain things that could potentially expose them to bad things happening.”
How to deal with common security risks facing LLMs
Splunk’s SURGe research team found that, instead of securing an LLM directly, the simplest way to secure LLMs using the existing Splunk toolset was to focus on the model’s front end.
Using standard logging similar to other applications could solve for prompt injection, insecure output handling, model denial of service, sensitive information disclosure, and model theft vulnerabilities.
“We found that we could log the prompts users are entering into the LLM, and then the response that comes out of the LLM; those two bits of data alone pretty much gave us five of the OWASP Top 10,” Davis explained. “If the LLM developer makes sure those prompts and responses are logged, and Splunk provides an easy way to pick up that data, we can run any number of our queries or detections across that.”
Davis recommends that organisations adopt a similar security-first approach for LLMs and AI applications that has been used to protect web applications in the past.
“We have a saying that eating your cyber vegetables — or doing the basics — gives you 99.99% of your protections,” he noted. “And people really should concentrate on those areas first. It’s just the same case again with LLMs.”