Generative AI poses several security risks

The security risks posed by generative artificial intelligence (GenAI) were analyzed in a recent report by Transmit. The report includes screenshots of dark web forums, marketplaces and subscription-based services.

According to the report, blackhat GenAI tools make it easier to create and automate fraud campaigns, resulting in an increased volume, velocity and variety of attacks. GenAI tools also automate pentesting to find enterprise vulnerabilities and circumvent the security controls used by specific targets.

Configuration (config) files generated with the assistance of GenAI are used to validate compromised accounts at rates of up to 500 credentials per minute, according to the report. Bundled services such as Remote Desktop Protocol (RDP) access and credit card checkers are augmented with AI to streamline attack creation.
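For defenders, the volume figure cited in the report suggests a simple detection signal. The sketch below is a minimal, hypothetical illustration (not taken from the report) of a per-source velocity check that flags credential-validation bursts approaching that rate; the threshold and window values are assumptions that would need tuning to real traffic.

```python
from collections import defaultdict, deque
import time

# Assumed values: the report cites up to 500 credential validations per
# minute, so flag any single source far above normal human login pace.
WINDOW_SECONDS = 60
MAX_ATTEMPTS_PER_WINDOW = 30  # assumption; tune to your own traffic

_attempts = defaultdict(deque)  # source key -> timestamps of recent attempts


def record_login_attempt(source_key: str, now: float | None = None) -> bool:
    """Record one credential-validation attempt and return True if the
    source should be flagged for credential-stuffing-like velocity."""
    now = time.time() if now is None else now
    window = _attempts[source_key]
    window.append(now)
    # Drop attempts that fall outside the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_ATTEMPTS_PER_WINDOW


# Example: a bot testing credentials every ~0.12 s (roughly 500/minute)
# trips the flag almost immediately.
if __name__ == "__main__":
    start = time.time()
    for i in range(60):
        flagged = record_login_attempt("203.0.113.7", now=start + i * 0.12)
    print("flagged:", flagged)
```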

The report found that GenAI rapidly generates real or synthetic identity data to create hard-to-detect fraudulent accounts, aged with eight-plus years of order history to appear legitimate. GenAI also makes it easy to create high-quality fake IDs capable of bypassing security checks, including most AI-driven identity verification.

Video and voice deepfakes lure victims into scams, while voice cloning can trick call center voice authentication systems, according to the report. Dark web markets offer 24/7 escrow and seller ratings as high as 4.99/5 to assure purchasers of product efficacy.

The report also includes advice for mitigating GenAI threats, such as implementing fraud prevention, identity verification and customer identity management services.
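As one hedged illustration of how such controls might fit together (the signal names, weights and thresholds below are assumptions for the sketch, not the report's recommendations), a customer identity layer can combine fraud signals into a decision to allow a login, step up identity verification, or block it outright.

```python
from dataclasses import dataclass


@dataclass
class LoginContext:
    """Signals a customer identity service might collect per login.
    Field names are illustrative assumptions, not a specific product's API."""
    new_device: bool
    account_age_days: int
    failed_attempts_last_hour: int
    voice_or_video_channel: bool  # e.g. a call-center interaction


def decide_action(ctx: LoginContext) -> str:
    """Return 'allow', 'step_up' (extra identity verification), or 'block'."""
    score = 0
    if ctx.new_device:
        score += 2
    if ctx.account_age_days < 30:           # young accounts carry less trust
        score += 2
    if ctx.failed_attempts_last_hour >= 5:  # possible credential testing
        score += 3
    if ctx.voice_or_video_channel:          # deepfake / voice-clone risk channel
        score += 1
    if score >= 6:
        return "block"
    if score >= 3:
        return "step_up"  # e.g. document or liveness verification
    return "allow"


print(decide_action(LoginContext(True, 10, 6, False)))  # -> 'block'
```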

GenAI can also benefit defenders as a fraud analytics tool: it can query an organization’s identity data to generate graphs or insights about end users, devices, risk or trust events, attack types and other information, helping teams adapt to rapidly emerging trends.
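A minimal sketch of that idea, assuming a hypothetical ask_llm helper and an in-memory event table (neither comes from the report): aggregate identity and risk events into a compact summary, then hand it to whichever GenAI model the organization uses to describe emerging attack trends.

```python
from collections import Counter

# Hypothetical identity/risk events; a real deployment would pull these
# from an identity data store or SIEM rather than a hard-coded list.
events = [
    {"user": "u1", "device": "d9", "event": "login_failed", "attack_type": "credential_stuffing"},
    {"user": "u2", "device": "d3", "event": "account_created", "attack_type": None},
    {"user": "u1", "device": "d9", "event": "login_failed", "attack_type": "credential_stuffing"},
    {"user": "u4", "device": "d7", "event": "voice_auth_failed", "attack_type": "voice_clone"},
]


def summarize(events: list[dict]) -> str:
    """Aggregate risk/trust events into a short text summary for an LLM prompt."""
    attack_counts = Counter(e["attack_type"] for e in events if e["attack_type"])
    device_counts = Counter(e["device"] for e in events)
    return (
        f"Attack types observed: {dict(attack_counts)}. "
        f"Most active devices: {device_counts.most_common(3)}."
    )


def ask_llm(prompt: str) -> str:
    """Placeholder for a call to the organization's chosen GenAI model."""
    return f"[model response to: {prompt!r}]"


summary = summarize(events)
print(ask_llm(
    "Given these identity analytics, describe emerging fraud trends "
    "and suggest where to tighten controls: " + summary
))
```

The aggregation step matters for the same reason the report highlights adaptability: the model sees a distilled view of risk and trust events rather than raw records, so insights can be refreshed as new attack types appear.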

Read the report
