5 Emerging AI Threats Australian Cyber Pros Must Watch in 2025
Australian cybersecurity professionals can expect threat actors to exploit artificial intelligence to diversify tactics and scale the volume of cyberattacks targeting organisations in 2025, according to security tech firm Infoblox.
Last year, cyber teams in APAC witnessed the first signs of AI being used to commit crimes such as financial fraud, and some observers have linked AI to a DDoS attack on the financial services sector in Australia.
This year, Australia’s cyber defenders can expect AI to be used in a new breed of cyberattacks:
- AI cloning: AI could be used to create synthetic audio voices to commit financial fraud.
- AI deepfakes: Convincing fake videos could lure victims to click or provide their details.
- AI-powered chatbots: AI chatbots could become part of complex phishing campaigns.
- AI-enhanced malware: Criminals could use LLMs to rewrite and mutate existing malware code.
- Jailbreaking AI: Threat actors will use “dark” AI models without safeguards.
Infoblox’s Bart Lenaerts-Bergmans told Australian defenders during a webinar that they can expect an increase in the frequency and sophistication of attacks, as more actors now have access to AI tools and techniques.
1. AI for cloning
Adversaries can use generative AI tools to create synthetic audio content that sounds realistic. The cloning process, which can be done quickly, leverages data available in the public domain, such as an audio interview, to train an AI model and generate a cloned voice.
SEE: Australian government proposes mandatory guardrails for AI
Lenaerts-Bergmans said cloned voices can exhibit only minor differences in intonation or pacing compared to the original voice. Adversaries can combine cloned voices with other tactics, such as spoofed email domains, to appear legitimate and facilitate financial fraud.
2. AI deepfakes
Criminals can use AI to create realistic deepfake videos of high-profile individuals, which they can use to lure victims into cryptocurrency scams or other malicious activities. The synthetic content makes it easier to socially engineer and defraud victims.
Infoblox referenced deepfake videos of Elon Musk uploaded to YouTube accounts with millions of subscribers. QR codes shown in the videos directed many viewers to malicious crypto sites and scams. It took 12 hours for YouTube to remove the videos.
3. AI-powered chatbots
Adversaries have begun using automated conversational agents, or AI chatbots, to build trust with victims and ultimately scam them. The technique mimics how an enterprise might pair human agents with an AI chatbot to engage and “convert” a prospect.
One example of crypto fraud involves attackers using SMS to build relationships before incorporating AI chatbot elements to advance their scheme and gain access to a crypto wallet. Infoblox noted that warning signs of these scams include suspicious phone numbers and poorly designed language models that repeat answers or use inconsistent language.
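As a rough illustration of how one of those warning signs could be checked, the sketch below flags near-duplicate replies in a chat transcript. The threshold and the sample messages are hypothetical, and this is not an Infoblox capability; it simply shows how a scripted scam bot’s repetition can be measured.

```python
# Hedged illustration only: flags near-duplicate replies from a suspected scam
# chatbot, one of the warning signs described above. The threshold and the
# sample transcript are hypothetical, not part of any vendor tooling.
from difflib import SequenceMatcher

def repeated_reply_ratio(replies, threshold=0.9):
    """Return the fraction of consecutive reply pairs that are near-duplicates."""
    if len(replies) < 2:
        return 0.0
    repeats = sum(
        1
        for a, b in zip(replies, replies[1:])
        if SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold
    )
    return repeats / (len(replies) - 1)

# Hypothetical transcript from a suspected investment-scam bot
replies = [
    "Trust me, this platform doubled my savings in two weeks.",
    "Trust me, this platform doubled my savings in two weeks!",
    "Just deposit a small amount first and you will see the returns.",
]
if repeated_reply_ratio(replies) > 0.3:
    print("Warning sign: the counterpart repeats answers like a scripted bot.")
```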
4. AI-enhanced malware
Criminals can now use LLMs to automatically rewrite and mutate existing malware to bypass security controls, making it more difficult for defenders to detect and mitigate. The rewriting can be repeated until the code no longer triggers any detections.
SEE: The alarming state of Australian data breaches in 2024
For example, a JavaScript framework used in drive-by download attacks could be fed to an LLM, which then modifies the code by renaming variables, inserting code, or removing whitespace to slip past typical signature-based detection.
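To show why this defeats exact-match indicators, here is a small, defensive sketch. It is written in Python rather than the JavaScript of the example above and is not drawn from Infoblox: renaming a single variable changes the sample’s SHA-256 hash, while a crude fingerprint built from the code’s structure ignores the rename.

```python
# Defensive illustration (not Infoblox tooling): a trivial rename defeats an
# exact SHA-256 indicator, but a crude structural fingerprint that ignores
# identifier names survives it. The snippets are harmless Python stand-ins
# for the JavaScript described above; 'fetch' and 'run' are never executed.
import ast
import hashlib

original = "payload = fetch()\nrun(payload)\n"
rewritten = "x9q = fetch()\nrun(x9q)\n"  # the same logic with 'payload' renamed

def sha256(source: str) -> str:
    return hashlib.sha256(source.encode()).hexdigest()

def structural_fingerprint(source: str) -> str:
    """Hash the sequence of AST node types, ignoring identifier names."""
    node_types = [type(node).__name__ for node in ast.walk(ast.parse(source))]
    return hashlib.sha256(" ".join(node_types).encode()).hexdigest()

print(sha256(original) == sha256(rewritten))  # False: the exact hash is ephemeral
print(structural_fingerprint(original) == structural_fingerprint(rewritten))  # True
```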
5. Jailbreaking AI
Criminals are bypassing the safeguards of mainstream LLMs like ChatGPT or Microsoft Copilot to generate malicious content at will. Purpose-built “jailbroken” AI models already include the likes of FraudGPT, WormGPT, and DarkBERT, which have no built-in legal or ethical guardrails.
Lenaerts-Bergmans explained that cybercriminals can use these AI models to generate malicious content on demand, such as creating phishing pages or emails that mimic legitimate services. Some are available on the dark web for just $100 per month.
Expect detection and response capabilities to become less effective
Lenaerts-Bergmans said AI threats may leave security teams with intelligence gaps, as existing tactical indicators such as file hashes become effectively ephemeral.
He said “detection and response capabilities will drop in effectiveness” as AI tools are adopted.
Infoblox said detecting criminal activity at the DNS level lets cyber teams gather intelligence earlier in the cybercriminal’s workflow, potentially stopping threats before they escalate into an active attack.
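A minimal sketch of that idea follows, assuming a defender already has a blocklist of known-malicious domains and a parsed DNS query log. Both are invented here for illustration; a real deployment would pull domains from a threat-intelligence feed and read queries from a resolver or protective DNS service rather than hard-coded lists.

```python
# Minimal sketch of DNS-level detection, not Infoblox's product. The blocklist,
# query log, and field names are invented for illustration; a real deployment
# would pull domains from a threat-intelligence feed and read queries from a
# resolver or protective DNS service.
MALICIOUS_DOMAINS = {"fake-crypto-giveaway.example", "wallet-verify.example"}

dns_query_log = [
    {"client": "10.0.0.12", "qname": "updates.microsoft.com"},
    {"client": "10.0.0.31", "qname": "login.wallet-verify.example"},
]

def flag_suspicious_queries(log, blocklist):
    """Return queries whose domain, or any parent domain, is on the blocklist."""
    hits = []
    for entry in log:
        qname = entry["qname"].lower().rstrip(".")
        labels = qname.split(".")
        # Check the full name and each parent, e.g. a.b.example -> b.example
        candidates = {".".join(labels[i:]) for i in range(len(labels) - 1)}
        candidates.add(qname)
        if candidates & blocklist:
            hits.append(entry)
    return hits

for hit in flag_suspicious_queries(dns_query_log, MALICIOUS_DOMAINS):
    print(f"Early signal: {hit['client']} tried to resolve {hit['qname']}")
```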