UK Study: Generative AI May Increase Ransomware Threat
The U.K.’s National Cyber Security Centre has released a new study that finds generative AI may increase risks from cyber threats such as ransomware.
Overall, the report found that generative AI will provide “capability uplift” to existing threats rather than create brand-new ones. Before they can take full advantage of generative AI, threat actors will need access to “quality training data, significant expertise (in both AI and cyber), and resources,” a bar the NCSC does not expect most to clear until 2025. From then on, threat actors “will be able to analyse exfiltrated data faster and more effectively, and use it to train AI models.”
How generative AI may ‘uplift’ attacks
“We must ensure that we both harness AI technology for its vast potential and manage its risks – including its implications on the cyber threat,” wrote NCSC CEO Lindy Cameron in a press release. “The emergent use of AI in cyber attacks is evolutionary not revolutionary, meaning that it enhances existing threats like ransomware but does not transform the risk landscape in the near term.”
The report sorted threats (Figure A) by their potential for “uplift” from generative AI and by type of threat actor: nation-state sponsored, well-organized, and less-skilled or opportunistic attackers.
Figure A
The generative AI threat extending to 2025 comes from “evolution and enhancement of existing tactics, techniques and procedures,” not brand-new ones, the report found.
AI services lower the barrier to entry for ransomware attackers
Ransomware is expected to remain a dominant form of cyber crime, the report said. Just as attackers offer ransomware-as-a-service, they now offer generative AI-as-a-service as well.
SEE: A recent malware botnet snags cloud credentials from AWS, Microsoft Azure and more (TechRepublic)
“AI services lower barriers to entry, increasing the number of cyber criminals, and will boost their capability by improving the scale, speed and effectiveness of existing attack methods,” stated James Babbage, director general for threats at the National Crime Agency, as quoted in the NCSC’s press release about the study.
Ransomware actors are already using generative AI for reconnaissance, phishing and coding, a trend that the NCSC expects to continue “to 2025 and beyond.”
Social engineering can be facilitated by AI
Social engineering will see significant uplift from generative AI over the next two years, the report found. For example, generative AI can remove the spelling and grammar errors that often give spam messages away, and it can generate convincing new content for attackers and defenders alike.
Phishing and malware attackers could use AI – but only sophisticated ones are likely to have it
Similarly, threat actors can use generative AI to gain access to accounts or password information in the course of a phishing attack. Using generative AI for malware, however, will take advanced threat actors: to create malware that can evade today’s security filters, a model would need to be trained on large amounts of high-quality exploit data. The only groups likely to hold such data today are nation-state actors, and the report said there is a “realistic possibility” that such repositories exist.
Vulnerabilities may come at a faster pace due to AI
Network managers racing to patch vulnerabilities before they are exploited may find the job getting harder, as generative AI shortens the time between a vulnerability being identified and being exploited.
How defenders can use generative AI
The NCSC pointed out that some of the benefits generative AI provides to cyberattackers can benefit defenders as well. Generative AI can help find patterns to speed up the time it takes to detect or triage attacks and identify malicious emails or phishing campaigns.
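As a minimal illustrative sketch (not taken from the NCSC report), the kind of pattern-based email triage described above can be automated even with simple rules before any AI model is involved. The phrases, weights and threshold below are hypothetical examples, not real detection signatures.

```python
# Toy illustration of pattern-based phishing triage of the kind defenders
# might pair with AI-assisted detection. All phrases, weights and the
# threshold are hypothetical, for demonstration only.
import re

SUSPICIOUS_PHRASES = {
    "verify your account": 3,
    "urgent action required": 3,
    "password": 2,
    "click the link": 2,
    "invoice attached": 1,
}

def phishing_score(email_text: str) -> int:
    """Return a crude risk score from phrase matches and raw IP-address URLs."""
    text = email_text.lower()
    score = sum(w for phrase, w in SUSPICIOUS_PHRASES.items() if phrase in text)
    # URLs pointing at bare IP addresses are a classic phishing marker.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", text):
        score += 4
    return score

def triage(email_text: str, threshold: int = 4) -> str:
    """Flag messages that cross the (hypothetical) threshold for human review."""
    return "review" if phishing_score(email_text) >= threshold else "deliver"
```

In practice, a generative model would replace or augment the hand-written phrase list, spotting paraphrased lures that fixed keywords miss; the triage step, routing high-scoring messages to analysts, stays the same.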
To improve global defenses against attackers using generative AI, the U.K. organized the creation of the Bletchley Declaration in November 2023 as a guideline for addressing forward-looking AI risk.
The NCSC and some U.K. private industry organizations have adopted AI for improved threat detection and security-by-design under the £2.6 billion ($3.3 billion) Cyber Security Strategy announced in 2022.