NCSC Warns That AI Is Already Being Used by Ransomware Gangs
In a newly published report, the UK’s National Cyber Security Centre (NCSC) has warned that malicious actors are already taking advantage of artificial intelligence, and that the volume and impact of cyber threats – including ransomware – will increase over the next two years.
The NCSC, which is part of GCHQ (the UK’s intelligence, security, and cyber agency), assesses that AI has enabled relatively unskilled hackers to “carry out more effective access and information gathering operations… by lowering the barrier of entry to novice cybercriminals, hacker-for-hire and hacktivists.”
Scams and cyber attacks have been with us for decades, but scammers and other cybercriminals have often been betrayed by poor grammar and giveaway spelling mistakes in their emails and texts – especially when the attackers were not native speakers of their targets’ language. Generative AI removes that tell, producing fluent, convincing lures at scale.
Interestingly, other security researchers have questioned just how useful current artificial intelligence technology really is to cybercriminals crafting attacks. A study published in December 2023 found that phishing emails were equally effective whether they were written by a human or by an AI chatbot.
What is clear, however, is that publicly available AI tools have made it practically child’s play to generate not only believable text but also convincing images, audio, and even deepfake video that can be used to dupe targets.
Furthermore, the NCSC’s report, entitled “The Near-Term Impact of AI on the Cyber Threat,” warns that the technology can be used by malicious hackers to identify high-value data for examination and exfiltration, maximising the impact of security breaches.
Chillingly, the NCSC believes that by 2025, “Generative AI and large language models (LLMs) will make it difficult for everyone, regardless of their level of cyber security understanding, to assess whether an email or password reset request is genuine, or to identify phishing, spoofing or social engineering attempts.”
That is frankly terrifying.
In case you hadn’t noticed, 2025 is less than one year away.
Fortunately, it’s not all bad news when it comes to artificial intelligence.
AI can also be used defensively, strengthening an organisation’s security through improved detection of threats such as malicious emails and phishing campaigns, and ultimately making them easier to counteract.
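To make that concrete, here is a minimal sketch of one common defensive approach: a text classifier that scores incoming emails for phishing likelihood. The tiny inline dataset, the choice of model, and the 0.5 flagging threshold are all illustrative assumptions for this example – none of them come from the NCSC report.

```python
# A minimal sketch of AI-assisted phishing detection: TF-IDF
# bag-of-words features feeding a logistic regression classifier.
# The training data and threshold below are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = phishing, 0 = legitimate.
emails = [
    ("Your account has been suspended - verify your password now", 1),
    ("Urgent: confirm your bank details to avoid account closure", 1),
    ("Meeting moved to 3pm, updated agenda attached", 0),
    ("Quarterly report draft is ready for your review", 0),
]
texts, labels = zip(*emails)

# Unigram and bigram features keep the model simple but sensitive
# to tell-tale phrases such as "verify your password".
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(texts, labels)

# Score an incoming message; anything above the (arbitrary) 0.5
# threshold would be flagged for quarantine or human review.
incoming = "Please reset your password immediately using this link"
score = model.predict_proba([incoming])[0][1]
print(f"Phishing probability: {score:.2f}")
```

In practice, a defender would train on a far larger labelled corpus and combine the classifier’s score with other signals – sender reputation, link analysis, attachment scanning – rather than acting on email text alone.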
As with many technological advances, AI can be used for good as well as bad.
Editor’s Note: The opinions expressed in this guest author article are solely those of the contributor and do not necessarily reflect those of Tripwire.