Dark Web Markets Offer New FraudGPT AI Tool


Cybersecurity experts have identified a new AI tool called “FraudGPT,” which has been circulating on the Dark Web and Telegram channels since July 22, 2023.

FraudGPT has been advertised as an all-in-one solution for cybercriminals. Its advertised features include crafting spear-phishing emails, creating undetectable malware, generating phishing pages, identifying vulnerable websites and even offering tutorials on hacking techniques.

“Generative AI tools provide criminals the same core functions that they provide technology professionals: the ability to operate at greater speed and scale,” explained John Bambenek, principal threat hunter at Netenrich.

“Attackers can now generate phishing campaigns quickly and launch more simultaneously.”

Netenrich’s threat research team has been closely monitoring the activities surrounding FraudGPT and the threat actor behind it. According to an advisory published by the firm on Tuesday, the threat actor had previously been an established vendor on various Dark Web marketplaces. 

However, in a strategic move to evade marketplace exit scams, the actor established a presence on Telegram, providing a more stable platform to offer their malicious services.

Read more on Telegram-enabled attacks: Telegram, WhatsApp Trojanized to Target Cryptocurrency Wallets

The subscription fees for FraudGPT range from $200 per month to $1,700 per year, and the tool boasts over 3,000 confirmed sales and reviews.

To combat this escalating threat, experts emphasized the need for continuous innovation in cybersecurity defenses.

“OpenAI has been actively combating jailbreaking, but it’s been an ongoing struggle. Rules are created, rules are broken, new rules are created, those rules are broken, and on and on,” commented Pyry Åvist, co-founder and CTO at Hoxhunt.

“But perhaps the most important takeaway, given the emergence of black-hat GPT models, is that good security awareness, phishing and behavior change training work.”

According to the executive, users further along in a security awareness and behavior change program demonstrated notable resilience against both human-written and AI-generated phishing emails.

“Failure rates dropped from over 14% among less trained users to between 2% and 4% among experienced users,” Åvist explained.

The Netenrich advisory on FraudGPT comes just two weeks after SlashNext discovered WormGPT on July 13.

“The release of FraudGPT on the heels of WormGPT is just the start of many tools that leverage generative AI,” said SlashNext CEO Patrick Harr.

“It is of the utmost importance for security teams to use tools that leverage AI to increase the speed, accuracy and automation required to stop these threats from turning into breaches.”


