Tips for Detecting and Preventing Multi-Channel Impersonation Attacks
Recently, Mark Read, the CEO of the world’s biggest advertising group, was the target of a deepfake scam using an AI-based voice clone. Read disclosed that scammers used a publicly available photo to create a fake WhatsApp account under his name, then used that account to arrange a Microsoft Teams call between one of his agency heads, a senior executive, and the scammers. Once inside the Teams meeting, the scammers deployed a voice clone built from YouTube footage of the executive, while impersonating him off-camera through the chat function.
While the scam was unsuccessful this time, deepfakes and impersonation attempts are becoming increasingly common and sophisticated, according to the Identity Theft Resource Center. Unfortunately, this example is among the most common applications of deepfakes, along with fraudulent videos of celebrities, politicians, or other public figures, that can spread misinformation, damage reputations, or incite conflicts.
In these instances, expertise becomes crucial in swiftly identifying and mitigating the threat. Deepening your understanding of the imposters and deploying effective countermeasures is imperative for maintaining a company’s integrity in the digital landscape. Recognizing fraud and responding effectively to deceitful accounts is critical in shielding executives from harm and protecting the organization from potential reputational and financial repercussions.
As technology evolves and improves, detecting deepfakes will become increasingly difficult. But there is hope on the horizon, as AI can also be used for good, to build up defensive postures, and assist in flagging scams before they become problems. Keep reading to learn more about outsmarting the scammers.
Unmasked: Recognizing scammers to stop being victimized
The best way for executives to avoid becoming victims is to detect threats before they cause financial loss, data loss, or other damage. The incident mentioned earlier underscores just how easy it is for attackers to set up fake profiles across multiple channels, including LinkedIn, Telegram, WhatsApp, and other social media platforms, to establish legitimacy before contacting unsuspecting employees or partners to carry out their scam.
These impersonators build detailed profiles using publicly available information, including real photos of individuals and personal details, even mimicking their unique speaking style and tone, all lending greater legitimacy. Therefore, protecting against fake accounts and social media impersonation requires a multi-faceted approach, beyond just enforcing unique passwords.
The first step in defending against impersonation attacks is recognizing a fraudulent profile. Under close examination, impersonation accounts often display subtle but telling anomalies compared to authentic profiles. For example, profile pictures may look generic, stock-like, or unnatural; bios may be too vague or oddly formal for social media; and account creation dates often appear very recent. These clues frequently give away imposter profiles, which are engineered for malicious activities like phishing scams, installing malware, and orchestrating broader cyberattacks. Train employees to make a routine of scrutinizing profiles for completeness and authenticity. Encourage them to explore the digital footprint of suspicious accounts and cross-reference what they find with other public information where possible. Genuine accounts usually have a consistent history of posts and interactions, unlike fake accounts, which may show minimal activity.
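The profile checks above can be sketched as a simple heuristic function. This is a minimal illustration only: the field names (`created_at`, `bio`, `post_count`, `has_custom_photo`) and thresholds are assumptions for the sketch, not any platform's real API or vendor guidance.

```python
from datetime import datetime, timezone
from typing import Optional

def profile_red_flags(profile: dict, now: Optional[datetime] = None) -> list[str]:
    """Return heuristic red flags for a social-media profile.

    Field names and thresholds are illustrative assumptions;
    tune them for your platform and risk tolerance.
    """
    now = now or datetime.now(timezone.utc)
    flags = []
    if (now - profile["created_at"]).days < 30:
        flags.append("account created very recently")
    if len(profile.get("bio", "")) < 20:
        flags.append("bio missing or unusually sparse")
    if profile.get("post_count", 0) < 5:
        flags.append("little or no posting history")
    if not profile.get("has_custom_photo", False):
        flags.append("generic or missing profile picture")
    return flags
```

In practice, a flagged profile would feed into a manual review queue rather than trigger automatic blocking, since legitimate new accounts can trip several of these heuristics.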
Secondly, personal and professional connections are much harder to fake. It is reasonable to expect that an accomplished executive will have a long list of contacts, current and former associates, customers, and friends following their profiles. So, examine follower and connection lists to identify imbalances in the ratio of followers to following. Fake accounts often follow many users but are followed by few, and may disproportionately target high-profile or similar accounts. Analytics tools can assess this ratio and visualize patterns, quickly flagging accounts that deviate from the norm.
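As a rough illustration of the ratio check, a sketch like the following could flag accounts that follow far more users than follow them back. The thresholds are assumptions for the example and would need tuning per platform:

```python
def follower_ratio_flag(followers: int, following: int,
                        min_following: int = 100, max_ratio: float = 0.1) -> bool:
    """Flag accounts that follow many users but attract few followers back.

    min_following avoids judging accounts with too little data;
    max_ratio is an illustrative cutoff, not an industry standard.
    """
    if following < min_following:
        return False  # not enough following activity to judge
    return (followers / following) < max_ratio
```

An account following 1,500 users with only 20 followers would be flagged, while an executive with 5,000 followers and 300 following would not.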
Look and Listen: Using patterns and behaviors to spot scams
Moving beyond social connections, it’s important to scrutinize a profile’s content for authenticity. This means evaluating the relevance and quality of what the user has posted. Imposter accounts may share spammy or irrelevant material, often filled with suspicious links or promotional content that doesn’t align with the genuine persona of the account. To defend against this threat, set alerts for keywords associated with spammy or promotional content within your network. This proactive measure helps you quickly identify and investigate accounts that frequently use such terms inappropriately.
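A keyword alert of this kind can be as simple as scanning recent posts against a watchlist. The keyword list below is purely illustrative; a real deployment would maintain it from observed scam campaigns:

```python
# Illustrative watchlist of spam-associated phrases (assumptions, not real data)
SPAM_KEYWORDS = {"crypto giveaway", "limited offer", "click here", "free money"}

def keyword_alerts(posts: list[str], keywords=SPAM_KEYWORDS) -> list[tuple[int, str]]:
    """Return (post_index, keyword) pairs for posts containing watched terms."""
    hits = []
    for i, post in enumerate(posts):
        text = post.lower()
        for kw in keywords:
            if kw in text:
                hits.append((i, kw))
    return hits
```

Case-insensitive substring matching keeps the sketch simple; production systems would add tokenization, fuzzy matching, and link reputation checks.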
Once you have sorted out any red flags due to content, the next step is to employ social listening tools to track and analyze the profile’s engagement patterns over time. Look for anomalies such as sudden spikes in likes, comments, or shares, which could all indicate the use of automation or coordinated inauthentic behavior. Fake accounts will typically display abnormal engagement patterns aimed at fabricating authenticity. Monitoring these patterns can help identify and flag imposters.
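One simple way to surface the engagement spikes described above is a z-score test over daily counts. This is a minimal sketch of the idea; real social listening tools use seasonal baselines and per-platform models rather than a flat mean:

```python
from statistics import mean, stdev

def engagement_spikes(daily_counts: list[int], z_threshold: float = 3.0) -> list[int]:
    """Return indices of days whose engagement deviates sharply from the mean.

    A plain z-score test: assumes roughly stable baseline engagement,
    which is an illustrative simplification.
    """
    if len(daily_counts) < 3:
        return []  # too little history to establish a baseline
    mu = mean(daily_counts)
    sigma = stdev(daily_counts)
    if sigma == 0:
        return []  # perfectly flat engagement, nothing to flag
    return [i for i, c in enumerate(daily_counts)
            if (c - mu) / sigma > z_threshold]
```

A week of steady likes followed by a sudden thousand-fold jump would be flagged, which is the kind of fabricated-authenticity pattern the text describes.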
Finally, third-party tools that use algorithms and machine learning to continuously monitor and analyze account behavior can significantly aid in detecting and blocking fake accounts. Choose reputable services that offer comprehensive monitoring and analysis features and that integrate with your existing security systems, so detection of and response to fake accounts is seamless.
Automate: Strengthen cyber defenses to mitigate threats
The threat of imposter accounts on social media is real and ever-evolving. To counter these attacks at scale, leveraging AI and machine learning can be a powerful defense mechanism for proactively detecting and remediating threats. Adopting the strategies above can protect organizations, their employees, and their online communities from the costly consequences of these fraudulent entities.
You can further insulate the organization from threats by building strong associations with reputable security thought leaders and communities. Stay current on the latest trends and tactics employed by malicious actors by subscribing to cybersecurity newsletters and participating in webinars and workshops, so you can recognize emerging threats built on impersonation schemes.
About the Author
Abhilash Garimella is the Head of Research at Bolster AI, where he leads the threat intelligence and SOC teams to detect and take down digital threats. Abhilash has a master’s in computer engineering and deep learning, and his work covers cybersecurity, online fraud detection, threat hunting, and applied machine learning. Prior to Bolster, Abhilash conducted threat research at McAfee; at Bolster, he was the original scientist developing models for automated threat detection and response. Follow Abhilash on LinkedIn and at Bolster’s blog https://www.bolster.ai/