- Trump taps Sriram Krishnan for AI advisor role amid strategic shift in tech policy
- 5 network automation startups to watch
- 4 Security Controls Keeping Up with the Evolution of IT Environments
- ICO Warns of Festive Mobile Phone Privacy Snafu
- La colaboración entre Seguridad y FinOps puede generar beneficios ocultos en la nube
NCSC Issues Cyber Warning Over AI Chatbots
Organizations have been warned about the cyber risks of large language models (LLMs), including OpenAI’s ChatGPT, by the UK’s National Cyber Security Centre (NCSC).
In a new post, the UK government agency urged caution when building integrations with LLMs into services or businesses. The NCSC said AI chatbots occupy a “blind spot” in our understanding, and the global tech community “doesn’t yet fully understand LLM’s capabilities, weaknesses and (crucially) vulnerabilities.”
The NCSC noted that while LLMs are fundamentally machine learning technologies, they are showing signs of general AI capabilities – something academia and industry are still trying to understand.
A major risk highlighted in the blog was prompt injection attacks, in which attackers manipulate the output of LLMs to launch scams or other cyber-attacks. This is because research suggests that LLMs inherently cannot distinguish between an instruction and data provided to help complete the instruction, said the NCSC.
This can lead to reputational risk to an organization, such as chatbots being subverted to say upsetting or embarrassing things.
Additionally, prompt injection attacks can have more dangerous outcomes. The NCSC gave a scenario of an attack on an LLM assistant used by a bank to allow account holders to ask questions. Here, an attacker may be able to launch a prompt injection attack that reprograms the chatbot into sending the user’s money to the attacker’s account.
The NCSC noted that research is ongoing into possible mitigations for these types of attacks, but there “are no surefire mitigations” as yet. It said we may need to apply different techniques to test applications based on LLMs, such as social engineering-like approaches to convince models to disregard their instructions or find gaps in instructions.
Be Cautious of Latest AI Trends
The NCSC also highlighted the risks of incorporating LLMs in the rapidly evolving AI market. Therefore, organizations that build services that user LLM APIs “need to account for the fact that models might change behind the API you’re using (breaking existing prompts), or that a key part of your integrations might cease to exist.”
The blog concluded: “The emergence of LLMs is undoubtedly a very exciting time in technology. This new idea has landed – almost completely unexpectedly – and a lot of people and organizations (including the NCSC) want to explore and benefit from it.
“However, organizations building services that use LLMs need to be careful, in the same way they would be if they were using a product or code library that was in beta. They might not let that product be involved in making transactions on the customer’s behalf, and hopefully wouldn’t fully trust it yet. Similar caution should apply to LLMs.”
Commenting on the NCSC’s warning, Oseloka Obiora, chief technology officer at RiverSafe, argued the race to embrace AI will have disastrous consequences if businesses fail to implement basic necessary due diligence checks.
“Chatbots have already been proven to be susceptible to manipulation and hijacking for rogue commands, a fact which could lead to a sharp rise in fraud, illegal transactions and data breaches.
“Instead of jumping into bed with the latest AI trends, senior executives should think again, assess the benefits and risks as well as implementing the necessary cyber protection to ensure the organization is safe from harm,” commented Obiora.
Register here: Embracing ChatGPT – Unleashing the Benefits of LLMs in Security Operations