FBI Warns GenAI is Boosting Financial Fraud
The FBI has warned that criminals are using generative AI to enhance financial fraud schemes and has issued new guidance to help the public protect themselves from these tactics.
A new alert from the US government agency’s Internet Crime Complaint Center (IC3) highlighted how these tools enable malicious actors to commit fraud on a larger scale and increase the believability of their schemes.
GenAI-enabled tactics include impersonating victims' loved ones to demand ransom payments and gaining access to bank accounts.
How GenAI Tools Are Used to Facilitate Fraud
The FBI has observed GenAI tools being used to assist with fraud in a number of ways.
Crafting More Realistic Written Messages
Criminals use tools like OpenAI’s ChatGPT to enhance written messages for social engineering attacks, such as romance and investment scams.
The tools assist foreign criminals targeting US citizens with language translation, limiting the grammatical and spelling errors that might otherwise serve as warning signs of fraud.
Messages can also be created faster, enabling fraudsters to reach a wider audience.
Additionally, AI-powered chatbots are being embedded in fraudulent websites to prompt victims to click on malicious links.
The FBI added that GenAI is enabling fraudsters to create large volumes of fictitious social media profiles that trick victims into sending money.
Generating Fake Images
Criminals are using AI-generated images to create believable social media profile photos, identification documents, and other images in support of their fraud schemes.
This includes producing photos to share with victims in private communications, convincing them they are speaking to a real person.
Other common uses for AI-generated images include creating images of celebrities or social media personas to promote counterfeit products.
There has also been evidence of AI-generated pornographic images of victims being used to demand payment in sextortion schemes.
Impersonating Individuals’ Voice and Video
The FBI said deepfake technology is now being frequently used to clone individuals’ voices and videos to commit major fraud schemes.
This includes generating short audio clips that impersonate a close relative of a victim, asking for immediate financial assistance or demanding a ransom.
Another example shows criminals attempting to bypass verification checks and obtain access to bank accounts by finding audio clips of individuals and impersonating them through AI.
AI-generated video is also being used in real-time video chats in which criminals pose as company executives and other authority figures to trick employees into making payments.
How to Defend Against AI-Generated Scams
The FBI issued guidance for the public to detect these types of AI-generated scams:
- Create a secret word or phrase with your family to verify their identity
- Look for subtle imperfections in images and videos, such as distorted hands or feet (see the sketch after this list for one programmatic check)
- Listen closely to tone and word choice to distinguish a legitimate phone call from a loved one from an AI-generated voice clone
- Limit publicly available online content featuring your image or voice, make social media accounts private and restrict followers to people you know
- Verify the identity of a caller by hanging up, looking up the contact details of the bank or organization that claims to be calling, and calling that number directly
- Never share sensitive information with people you have met only online or over the phone
- Do not send money, gift cards, cryptocurrency or other assets to people you do not know
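The FBI's image advice above is visual, but one weak programmatic complement is checking whether a suspicious image carries any camera metadata at all. The Python sketch below, a minimal example using the Pillow library (the file name is hypothetical), reads the EXIF fields that genuine phone and camera photos usually contain and that AI image generators typically omit. Metadata can be stripped for legitimate reasons too, so its absence is a suspicion signal, not proof of fabrication.

```python
from PIL import Image  # Pillow: pip install Pillow
from PIL.ExifTags import TAGS

def exif_camera_fields(path: str) -> dict:
    """Return camera-related EXIF fields from an image, if any are present."""
    img = Image.open(path)
    exif = img.getexif()  # empty mapping when the image has no EXIF data
    wanted = {"Make", "Model", "DateTime", "Software"}
    return {
        TAGS.get(tag_id, str(tag_id)): value
        for tag_id, value in exif.items()
        if TAGS.get(tag_id) in wanted
    }

if __name__ == "__main__":
    # "profile_photo.jpg" is a placeholder for the image being examined
    fields = exif_camera_fields("profile_photo.jpg")
    if not fields:
        print("No camera metadata found - treat the image with extra caution.")
    else:
        print("Camera metadata present:", fields)
```

This check is deliberately simple: it flags images that lack any provenance data, which is common for generated profile photos, while leaving judgment about visual imperfections to the human reviewer.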