Microsoft Thwarts $4bn in Fraud Attempts

Microsoft has blocked billions of dollars’ worth of fraud and scams over the course of the past year as threat actors increase their use of AI and automation.
The tech giant said in a Cyber Signals report yesterday that it thwarted $4bn in fraud attempts, rejected 49,000 fraudulent partnership enrolments and blocked 1.6 million bot sign-up attempts per hour.
It pointed to three specific areas where AI is helping threat actors to improve outcomes.
The first is e-commerce fraud, where AI tools are empowering scammers to build lookalike sites to harvest information and sell non-existent items. These can be set up in minutes, whereas previously the process would have taken days or weeks, Microsoft said.
“Using AI-generated product descriptions, images, and customer reviews, customers are duped into believing they are interacting with a genuine merchant, exploiting consumer trust in familiar brands,” it said.
“AI-powered customer service chatbots add another layer of deception by convincingly interacting with customers. These bots can delay chargebacks by stalling customers with scripted excuses and manipulating complaints with AI-generated responses that make scam sites appear professional.”
A second area of focus for scammers is employment fraud.
Generative AI (GenAI) tools are enabling threat actors to create fake job listings with the aim of stealing sensitive information from job seekers.
“They generate fake profiles with stolen credentials, fake job postings with auto-generated descriptions, and AI-powered email campaigns to phish job seekers. AI-powered interviews and automated emails enhance the credibility of job scams, making it harder for job seekers to identify fraudulent offers,” the report explained.
“Fraudsters often ask for personal information, such as resumes or even bank account details, under the guise of verifying the applicant’s information. Unsolicited text and email messages offering employment opportunities that promise high pay for minimal qualifications are typically an indicator of fraud.”
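The red flags Microsoft lists for job scams (unsolicited contact, high pay for minimal qualifications, requests for bank details) lend themselves to simple keyword heuristics. The sketch below is purely illustrative and not drawn from the report; the phrases, weights and threshold are invented for demonstration:

```python
# Illustrative sketch: score a job offer against the red flags Microsoft
# describes. Keywords, weights and the threshold are assumptions chosen
# for demonstration, not taken from the Cyber Signals report.
RED_FLAGS = {
    "bank account": 2,       # requests for financial details
    "no experience": 1,      # minimal qualifications
    "guaranteed income": 1,  # promises of high pay
    "act now": 1,            # pressure tactics
}

def scam_score(message: str) -> int:
    """Sum the weights of red-flag phrases found in the message."""
    text = message.lower()
    return sum(w for phrase, w in RED_FLAGS.items() if phrase in text)

def looks_like_job_scam(message: str, threshold: int = 2) -> bool:
    """Flag a message when enough red-flag phrases co-occur."""
    return scam_score(message) >= threshold
```

In practice, of course, such filters are only a first line of defence; the report's point is that AI-generated scam content is increasingly polished enough to evade surface-level checks like this one.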
AI Arms Tech Support Scammers
Finally, AI is helping fraudsters make tech support scams more successful, Microsoft said.
It highlighted voice phishing (vishing) campaigns from the Storm-1811 group, which convinced victims to give them access to their machines via Quick Assist. In these attacks, GenAI is likely used in the initial stages, the report claimed.
“Social engineering involves collecting relevant information about targeted victims and arranging it into credible lures delivered through phone, email, text, or other mediums,” it said.
“Various AI tools can quickly find, organize, and generate information, thus acting as productivity tools for cyber-attackers.”