Abnormal Security: Microsoft Tops List of Most-Impersonated Brands in Phishing Exploits
A significant portion of social engineering attacks, such as phishing, involve cloaking a metaphorical wolf in sheep’s clothing. According to a new study by Abnormal Security, which looked at brand impersonation and credential phishing trends in the first half of 2023, Microsoft was the brand most abused as camouflage in phishing exploits.
Of the 350 brands spoofed in phishing attempts blocked by Abnormal, Microsoft was the most frequently abused: its name appeared in 4.31% of those attempts, or approximately 650,000 messages. According to the report, attackers favor Microsoft because of the potential to move laterally through an organization’s Microsoft environments.
Abnormal’s threat unit also tracked how generative AI is increasingly being used to build social engineering attacks. The study examines how AI tools make it far easier and faster for attackers to craft convincing phishing emails, spoof websites and write malicious code.
Top 10 brands impersonated in phishing attacks
If 4.31% seems like a small figure, Abnormal Security CISO Mike Britton pointed out that it is still four times the impersonation volume of the second most-spoofed brand, PayPal, which appeared in 1.05% of the attacks Abnormal tracked. The 10 most impersonated brands of 2023 were:
- Microsoft: 4.31%
- PayPal: 1.05%
- Facebook: 0.68%
- DocuSign: 0.48%
- Intuit: 0.39%
- DHL: 0.34%
- McAfee: 0.32%
- Google: 0.30%
- Amazon: 0.27%
- Oracle: 0.21%
Best Buy, American Express, Netflix, Adobe and Walmart are some of the other impersonated brands among the list of 350 companies used in credential phishing and other social engineering attacks Abnormal flagged over the past year.
Attackers increasingly rely on generative AI
One aspect of brand impersonation is the ability to mimic a brand’s tone, language and imagery, something Abnormal’s report shows phishing actors are doing more of thanks to easy access to generative AI tools. Generative AI chatbots allow threat actors to create not only effective emails but also picture-perfect faux-branded websites, replete with brand-consistent images, logos and copy, to lure victims into entering their network credentials.
For example, Britton, who authored the report, wrote that Abnormal discovered an attack that used generative AI to impersonate the logistics company DHL. To steal the target’s credit card information, the sham email asked the victim to click a link to pay a delivery fee for “unpaid customs duties” (Figure A).
Figure A
How Abnormal dusts for generative AI fingerprints in phishing emails
Britton explained to TechRepublic that Abnormal tracks AI use with its recently launched CheckGPT, an internal, post-detection tool that helps determine when email threats, including phishing emails and other socially engineered attacks, have likely been created using generative AI tools.
“CheckGPT leverages a suite of open source large language models to analyze how likely it is that a generative AI model created the email message,” he said. “The system first analyzes the likelihood that each word in the message has been generated by an AI model, given the context that precedes it. If the likelihood is consistently high, it’s a strong potential indicator that text was generated by AI.”
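The mechanics Britton describes resemble perplexity-style AI-text detection. The sketch below illustrates that general approach rather than Abnormal’s CheckGPT implementation: it uses an open source GPT-2 model from Hugging Face to score how predictable each token of an email is given the text that precedes it. The model choice, sample email and interpretation threshold are assumptions for demonstration only.

```python
# Illustrative perplexity-style scoring of email text; not Abnormal's CheckGPT.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def ai_likelihood_score(text: str) -> float:
    """Mean log-likelihood the model assigns to each token given the tokens
    that precede it. Consistently high (less negative) values are one possible
    signal of machine-generated prose."""
    enc = tokenizer(text, return_tensors="pt")
    input_ids = enc["input_ids"]
    with torch.no_grad():
        # Passing labels makes the model return the average next-token loss.
        outputs = model(input_ids, labels=input_ids)
    return -outputs.loss.item()

email_body = (
    "Your parcel is currently on hold pending payment of customs duties. "
    "Please settle the outstanding fee using the secure link below."
)
print(f"mean token log-likelihood: {ai_likelihood_score(email_body):.2f}")
# In practice this score would be one feature among many, compared against
# baselines for human-written mail rather than a fixed cutoff.
```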
Attackers use generative AI for credential theft
Britton said attackers’ use of AI includes crafting credential phishing, business email compromise and vendor fraud attacks. While AI tools can also be used to create impersonated websites, “these are typically supplemental to email as the primary attack mechanism,” he said. “We’re already seeing these AI attacks play out — Abnormal recently released research showing a number of emails that contained language strongly suspected to be AI-generated, including BEC and credential phishing attacks.” He noted that AI removes the traditional dead giveaways of phishing: typos and egregious grammatical errors.
“Also, imagine if threat actors were to input snippets of their victim’s email history or LinkedIn profile content within their ChatGPT queries. This brings highly personalized context, tone and language into the picture — making BEC emails even more deceptive,” Britton added.
SEE: AI vs AI: the next front in the phishing wars (TechRepublic)
How hard is it to build effective email exploits with AI? Not very. Late in 2022, researchers at Tel Aviv-based Check Point demonstrated how generative AI could be used to create viable phishing content, write malicious code in Visual Basic for Applications and macros for Office documents, and even produce code for reverse shell operations (Figure B).
Figure B
They also published examples of threat actors using ChatGPT in the wild to produce infostealers and encryption tools (Figure C).
Figure C
How credential-focused phishing attacks lead to BECs
Britton wrote that credential phishing attacks are pernicious partly because they are the first step in an attacker’s lateral journey toward network persistence, an offender’s ability to take up parasitic, unseen residence within an organization. He noted that when attackers gain access to Microsoft credentials, for example, they can enter the Microsoft 365 enterprise environment to hack Outlook or SharePoint and launch further BEC and vendor fraud attacks.
“Credential phishing attacks are particularly harmful because they are typically the first step in a much more malicious campaign,” wrote Britton.
Because persistent threat actors can pose as legitimate network users, they can also perform thread hijacking: inserting themselves into existing enterprise email conversations to launch further phishing exploits, monitor mail, learn the organizational chain of command and target those who, for example, authorize wire transfers.
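A common defensive counterpart to this kind of mailbox persistence is auditing accounts for rules that quietly forward, redirect or delete mail, a staple of BEC tradecraft. The sketch below is a minimal illustration built on the Microsoft Graph messageRules endpoint; the access token, user identifier and the decision to treat any forward, redirect or delete action as suspicious are assumptions, and this is not a description of Abnormal’s product logic.

```python
# Illustrative audit of inbox rules via Microsoft Graph; token acquisition
# and permissions (e.g. MailboxSettings.Read) are assumed to be handled elsewhere.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def suspicious_inbox_rules(user_id: str, access_token: str) -> list[dict]:
    """Return inbox rules that forward, redirect, or silently delete mail,
    classic signs of an attacker maintaining unseen access to a mailbox."""
    url = f"{GRAPH}/users/{user_id}/mailFolders/inbox/messageRules"
    resp = requests.get(url, headers={"Authorization": f"Bearer {access_token}"})
    resp.raise_for_status()

    flagged = []
    for rule in resp.json().get("value", []):
        actions = rule.get("actions", {})
        if actions.get("forwardTo") or actions.get("redirectTo") or actions.get("delete"):
            flagged.append({
                "id": rule.get("id"),
                "name": rule.get("displayName"),
                "actions": actions,
            })
    return flagged

# Hypothetical usage:
# for rule in suspicious_inbox_rules("alice@example.com", token):
#     print(rule["name"], rule["actions"])
```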
“When attackers gain access to banking credentials, they can access the bank account and move funds from their victim’s account to one they own,” noted Britton. With stolen social media account credentials gained through phishing exploits, he said attackers can use the personal information contained in the account to extort victims into paying money to keep their data private.
BECs on the rise, along with sophistication of email attacks
Britton noted that successful BEC exploits are a key means for attackers to steal credentials from a target via social engineering. Unfortunately, BECs are on the rise, continuing a five-year trend, according to Abnormal. Microsoft Threat Intelligence reported that it detected 35 million business email compromise attempts, with an average of 156,000 attempts daily between April 2022 and April 2023.
Splunk’s 2023 State of Security report, based on a global survey of 1,520 security and IT leaders who spend half or more of their time on security issues, found that over the past two years, 51% of incidents reported were BECs — a nearly 10% increase vs. 2021 — followed by ransomware attacks and website impersonations.
Also increasing is the sophistication of email attacks, including the use of financial supply chain compromise, in which attackers impersonate a target organization’s vendors to, for example, request that invoices be paid, a phenomenon Abnormal reported on early this year.
SEE: New phishing and BECs increase in complexity, bypass MFA (TechRepublic)
If not dead giveaways, strong warning signs of phishing
The Abnormal report suggested that organizations be on the lookout for emails purporting to come from often-spoofed brands that feature the following red flags (a simple heuristic sketch follows the list):
- Persuasive warnings about the potential of losing account access.
- Fake alerts about fraudulent activity.
- Demands to sign in via the provided link.
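As a rough illustration of how those red flags might be screened for in code, the sketch below pairs a short list of commonly spoofed brand names with pressure-tactic phrases; the brand list, phrases and pass/fail logic are illustrative assumptions, not Abnormal’s detection logic.

```python
# Toy heuristic for the warning signs listed above; real detection systems
# weigh far more signals (sender infrastructure, link reputation, history, etc.).
import re

SPOOFED_BRANDS = ["microsoft", "paypal", "facebook", "docusign", "dhl"]
WARNING_PHRASES = [
    r"account (will be|has been) (suspended|locked|closed)",  # loss-of-access threats
    r"unusual|suspicious activity",                           # fake fraud alerts
    r"verify your (account|identity) (now|immediately)",      # urgency
    r"sign in (here|using the link below)",                   # demand to use provided link
]

def looks_like_brand_phish(sender: str, subject: str, body: str) -> bool:
    """Flag mail that name-drops an often-spoofed brand and uses at least one
    of the pressure tactics above."""
    text = f"{subject} {body}".lower()
    mentions_brand = any(b in sender.lower() or b in text for b in SPOOFED_BRANDS)
    pressure = any(re.search(p, text) for p in WARNING_PHRASES)
    return mentions_brand and pressure

print(looks_like_brand_phish(
    "service@paypa1-support.example.com",
    "Your account will be suspended",
    "We detected suspicious activity on your PayPal account. "
    "Sign in using the link below to verify your identity now.",
))  # True
```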