How Bad Actors Are Learning to Hack Humans in Phishing Attacks
Phishing Attacks Continue to Grow Because Cyber Criminals Have Learned Which Psychological Buttons To Push
By Franco De Bonis, Marketing Director, VISUA
The latest report issued by the APWG (Anti-Phishing Working Group) made for grim reading. In it, they reported that 316,747 phishing attacks were detected in December 2021, the highest monthly number in their reporting history and more than double the number recorded in early 2020. This figure was so troubling that we decided to do a historical deep-dive and compare the data over a number of years. You can see our full analysis here, but in short, one of the most troubling trends is the continued growth in brand spoofing, which reached a peak of 715 separate brands and organizations being spoofed as of September 2021, a more than 200% increase since January 2018. Meanwhile, bad actors have been busy spinning out phishing web pages, which are the key mechanism used to trick victims into giving up their credentials and other personal details, or into downloading malicious files. In fact, this activity saw a more than 400% increase over the same period.
So it seems that bad actors are reducing the number of themes or subjects used in email attacks while targeting more brands and using more web pages to ‘convert’ recipients into victims. But why have they latched onto this specific attack methodology? The simple answer is that it works. It also helps that it’s quick and easy. Bad actors are by no means lazy; they have shown themselves to be immensely resourceful, but the old adage of ‘work smarter, not harder’ most definitely applies here.
A study by UC Berkeley over a decade ago showed that well-designed fake sites were able to fool more than 90% of participants. With so much work having gone into staff training, one would hope that this number would have decreased significantly. In a recent study by Canada’s Terranova Security it does seem to have fallen, but it still sits at an alarming 20% of recipients being fooled into clicking a malicious link in a fake email or website. That means that of every ten recipients of a phishing email in an organization, two will take an action that could compromise company systems or data. Further, according to Deloitte, 91% of all cyberattacks begin as a phishing email.
It’s clear, therefore, that bad actors understand this is a numbers game. Many experts and pundits across the cybersecurity sector have uttered variations of the phrase ‘we have to stop phishing emails reaching every single person, while they only need to fool one!’ This highlights that bad actors are using human nature and emotional factors to hack what is currently a technology-oriented protection system. They exploit four key elements in everyone’s life: trust, lifestyle, urgency and confidence.
Trust:
Companies work hard and spend a lot of money to build trusting relationships with their customers. Bad actors exploit this by closely imitating the communications of these companies, often using pixel-perfect copies of existing communications, manipulated to achieve their goals. But they also trade on the trust we have in individuals. An example is when you receive a link to an online document from a ‘colleague’, which, when clicked, asks you to log in to your Google or Microsoft account. The login form/page is fake, so when you log in they capture your credentials. The really smart scammers will even forward your credentials to the real service, effectively logging you in for real, so you never even know that you’ve been phished!
Lifestyle:
Our lives are busier and more hectic than ever. We are in a multi-screen era, often on our phones while watching TV and switching between tasks all the time. We struggle to balance work and family life, and this all creates stress, FOMO (Fear of Missing Out) and FOF (Fear of Failure). Add to that our constant access to email and the web through our smartphones, and you have a recipe for potential disaster as people try to action a request when they’re tired or frazzled.
Urgency:
Bad actors use this frenetic pace of life against us by adding a sense of urgency to communications: “Your account will be suspended if you don’t confirm your account details”, “Your shipment will be returned if you don’t confirm your credit card details”. These are just two examples of how urgency can overcome doubt when we’re busy and/or stressed, because nobody has time to fix a lapsed account or track down a missing parcel!
Confidence (see Over-Confidence):
It is certainly true that anti-spam and anti-phishing systems have done a great job of reducing the levels of malicious content we receive, and that training has helped to educate users. But that can work against us. If you don’t get many phishing emails, you may become over-confident about the capabilities of the technology protecting you. Likewise, if you have been trained to spot a phishing email, you may be over-confident about your ability to tell whether an email or web page is fake or genuine. So when one does slip through, you may well trust its authenticity more than you should.
High-Tech Solution To A Low-Tech Problem
By combining these factors with relatively simple techniques, bad actors are seeing great results that achieve their goals. The anti-phishing industry, meanwhile, is focused on AI that targets programmatic attack vectors, and even here bad actors have learned new ways to hide. Not only do they make use of logos and other graphical themes used by companies, they hide text and forms as graphics too. They also use JavaScript to obfuscate key text strings with random letters, so ‘Login’ looks like ‘Lhkgdgowyailgqtagpibvzmen’ (the genuine letters L, o, g, i and n are buried among the random characters) until it’s rendered in a browser.
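To make the trick concrete, here is a minimal sketch of the idea, assuming the junk letters are wrapped in markup that the page’s styling hides before display; the markup, class name and regexes below are illustrative assumptions, not any real attacker’s code. A filter scanning the raw source sees only noise, while emulating the render recovers the real string.

```python
import re

# Hypothetical raw source as a text scanner might see it: the real letters are
# plain text, the junk letters sit in spans that the page hides (e.g. via
# display:none or script), so the browser shows only "Login".
raw_html = (
    'L<span class="junk">hkgdg</span>o<span class="junk">wyail</span>'
    'g<span class="junk">qtagp</span>i<span class="junk">bvzme</span>n'
)

# A naive keyword filter that strips tags but keeps all text sees only noise.
naive_view = re.sub(r"<[^>]+>", "", raw_html)
print(naive_view)       # Lhkgdgowyailgqtagpibvzmen -> the word "Login" is not found

# Emulating the render (dropping the hidden spans entirely) recovers the string.
rendered_view = re.sub(r'<span class="junk">[^<]*</span>', "", raw_html)
print(rendered_view)    # Login

print("login" in naive_view.lower())     # False - keyword filter is defeated
print("login" in rendered_view.lower())  # True  - the user still sees "Login"
```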
So, how do you deal with a relatively low-tech problem like brand spoofing that uses graphics as a weapon? You need to use Computer Vision (Visual-AI) to look at the email or web page not as code, but as the user sees it: a fully rendered page. To do this, the email/page needs to be captured as a flattened JPEG and then processed through the computer vision engine. This is a key step because the tricks used by bad actors are not effective post-render, so you see exactly what they want the victims to see.
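As a minimal sketch of that capture step, assuming Playwright for headless rendering and Pillow for the JPEG conversion (illustrative tooling, not necessarily what any given vendor uses):

```python
from playwright.sync_api import sync_playwright
from PIL import Image

def capture_rendered_page(url: str, out_path: str = "rendered.jpg") -> str:
    """Render a suspect page in a headless browser and save it as a flattened JPEG."""
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page(viewport={"width": 1280, "height": 800})
        page.goto(url, wait_until="networkidle")   # let scripts and obfuscation run
        page.screenshot(path="rendered.png", full_page=True)
        browser.close()

    # Flatten to JPEG so the downstream vision engine sees what the victim sees.
    Image.open("rendered.png").convert("RGB").save(out_path, "JPEG", quality=90)
    return out_path
```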
Processing of this image is then carried out using a combination of techniques (a rough code sketch follows the list):
Visual Search: looks at the overall image and compares it to previously ‘known good’ and ‘known bad’ examples, which can give a quick confirmation of a phishing attack using a previously used design/layout.
Logo Detection: looks for brands that are often spoofed; a match can trigger priority processing if it fits a potential threat profile.
Text Detection: analyzes the text looking for trigger words that could indicate a threat, like ‘username’, ‘password’, ‘credit card’, etc.
Object Detection: looks for key elements like buttons and forms, which in combination with text and logo detection increases the potential threat level.
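The sketch below shows how these signals might be combined, assuming imagehash for visual search, OpenCV template matching as a stand-in for logo detection and pytesseract for text detection; object detection of buttons and forms is omitted for brevity, and every threshold, weight and helper name is an assumption for illustration, not VISUA’s actual engine.

```python
import cv2
import imagehash
import pytesseract
from PIL import Image

TRIGGER_WORDS = {"username", "password", "credit card", "verify", "login"}

def visual_search_score(img: Image.Image, known_bad_hashes: list) -> float:
    """Compare a perceptual hash of the page against previously seen phishing layouts."""
    h = imagehash.phash(img)
    return 1.0 if any(h - bad <= 8 for bad in known_bad_hashes) else 0.0

def logo_detected(page_bgr, logo_bgr, threshold: float = 0.8) -> bool:
    """Naive template match for a frequently spoofed brand logo."""
    result = cv2.matchTemplate(page_bgr, logo_bgr, cv2.TM_CCOEFF_NORMED)
    return float(result.max()) >= threshold

def text_trigger_score(img: Image.Image) -> float:
    """OCR the rendered page and count credential-harvesting trigger words."""
    text = pytesseract.image_to_string(img).lower()
    hits = sum(word in text for word in TRIGGER_WORDS)
    return min(hits / len(TRIGGER_WORDS), 1.0)

def threat_score(jpg_path: str, logo_path: str, known_bad_hashes: list) -> float:
    """Blend the signals into a single 0-1 score; the weights are illustrative."""
    img = Image.open(jpg_path)
    page_bgr = cv2.imread(jpg_path)
    logo_bgr = cv2.imread(logo_path)
    return (0.4 * visual_search_score(img, known_bad_hashes)
            + 0.3 * (1.0 if logo_detected(page_bgr, logo_bgr) else 0.0)
            + 0.3 * text_trigger_score(img))
```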
The important thing to remember is that this approach does not replace the current programmatic methods to detect attacks, but works in concert with them to provide additional signals which can lead to more accurate determinations. There’s an interesting video on this subject here.
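As a purely illustrative example of that “working in concert”, a final determination might blend the visual score with existing programmatic checks; the signal names and weights below are assumptions, not a specification.

```python
# Illustrative only: blending a visual threat score with programmatic signals
# such as sender authentication and URL reputation. Names, weights and the
# cut-off are assumptions for this sketch, not any vendor's scoring model.
def final_verdict(visual_score: float, spf_pass: bool, url_reputation: float) -> str:
    combined = (0.5 * visual_score
                + 0.2 * (0.0 if spf_pass else 1.0)
                + 0.3 * url_reputation)   # url_reputation: 0 = clean, 1 = known bad
    return "quarantine" if combined >= 0.6 else "deliver"

# Example: a visually suspicious page from an unauthenticated sender
print(final_verdict(visual_score=0.8, spf_pass=False, url_reputation=0.4))  # quarantine
```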
Ultimately, given the volumes of phishing emails highlighted at the beginning of this article, the industry approach to controlling the flow and severity of phishing attacks is much like the old Chinese proverb about moving a mountain: one shovelful at a time. Using many tools and solving small issues one at a time may seem insignificant in the scheme of things, but as bad actors adapt and even simplify their approach to ramp up volumes, every 1% reduction in the number of phishing emails that get through equates to thousands of emails blocked and pages blacklisted. That can have a critical impact on the number of compromises that succeed.
About the Author
Franco De Bonis is the Director of Marketing at VISUA, a massively scaling company in the world’s fastest-growing future-tech sector: AI.
Franco was always fascinated with technology, which led to a career marketing technology and SaaS products, and a PostGrad and Masters in Digital Marketing. Franco also set up a digital marketing agency in 2007, which grew quickly and was acquired by a national marketing chain in 2013.
Franco joined VISUA (originally LogoGrab) in September 2019. It’s a company with a vision to address the growing challenge of providing insights and intelligence from visual media using a ‘People-First’ methodology. It has grown really big, really fast, with minimal outside investment. The VISUA brand was launched in 2020 to reflect the much broader range of solutions it now delivers to leading companies in the fields of brand monitoring, protection, and authentication.
If you want to discuss Visual-AI you can contact Franco at franco@visua.com or on LinkedIn. Find out more about Visual-AI (Computer Vision) at https://visua.com