AI-powered 'narrative attacks' a growing threat: 3 defense strategies for business leaders
Artificial intelligence is transforming business, from automating routine tasks and optimizing supply chains to powering sophisticated financial models and personalizing customer experiences at scale. It is enabling companies to operate with unprecedented efficiency and insight.
AI is aiding cybercriminals, too. Cyber attacks have traditionally focused on exploiting technical vulnerabilities in systems and networks, but generative AI has rapidly expanded and accelerated the threat landscape. Today, the most capable bad actors have integrated AI-generated misinformation and disinformation into their attack strategies, leaving organizations vulnerable to a broader spectrum of attacks.
Also: Most people worry about deepfakes – and overestimate their ability to spot them
In January, AI-enabled misinformation and disinformation topped the World Economic Forum’s annual global risk report, which warned of their potential to destabilize organizations, manipulate markets, and undermine trust in public institutions.
For example, a deepfake video of a company executive could trick employees into leaking sensitive data or tank the stock price. Or attackers could launch a coordinated campaign of AI-written social media posts spreading false information about a company, causing financial, reputational, and other harm.
By harnessing large language models, deepfakes, and bot networks, bad actors can now craft persuasive false narratives that exploit our biases and erode our trust, priming us for the real cyber attack to come or amplifying a previous cyber attack.
This blending of AI-powered misinformation and disinformation with traditional hacking techniques has given rise to a new breed of cyber threat – the narrative attack. These multi-pronged assaults weaponize manufactured stories to manipulate our perceptions, influence our behavior, and ultimately magnify the impact of a technical intrusion, leaving organizations and individuals vulnerable.
Also: Businesses’ cloud security fails are ‘concerning’ – as AI threats accelerate
I spoke with Jack Rice, a defense attorney and former CIA case officer, about the dangers of misinformation and disinformation and some defense strategies for business and organization leaders.
“Misinformation and disinformation are extremely effective at manipulating people’s beliefs and behaviors because people naturally gravitate towards information that confirms their existing views,” Rice said. “The goal of those who create and spread false information is to sow division in society to gain influence and control.”
Disinformation-amplified cyber attacks unfold in key phases
Information gathering: The attack begins with extensive reconnaissance of the target organization or sector. Cybercriminals identify key vulnerabilities, pain points, and fears that they can exploit. They then carefully craft a disinformation-based narrative designed to manipulate emotions, sow confusion, and erode trust. For example, attackers might create fake news articles or social media posts claiming that a company has suffered a massive data breach, even if no such breach has occurred.
Seeding the narrative: Once the false narrative is created, attackers seed it through seemingly credible channels such as social media, blogs, or news outlets. They may use bot networks and fake accounts to amplify the spread of disinformation, making it appear more legitimate and widespread. Hacker forums on the dark web enable coordinated dissemination for maximum impact. This phase sets the stage for the actual cyber attack.
Also: The best VPN services: Expert tested and reviewed
Launching the technical attack: With the target weakened by disinformation, cybercriminals launch their technical attack. This may involve exfiltrating data, deploying ransomware, or stealing funds. The prior disinformation campaign often opens new attack vectors, such as phishing emails that exploit the fear and uncertainty created by the false narrative. Victims are more likely to fall for these attacks in the wake of the disinformation.
Magnifying damage: Even after the technical attack ends, cybercriminals continue to spread disinformation to magnify the damage and sow further confusion. They may claim that the attack was more severe than it was or that the organization is covering up the extent of the damage. This ongoing disinformation campaign prolongs the reputational harm and erodes customer trust, even if the company has managed to restore its systems.
There are several notable examples of cyber attacks amplified by disinformation. In the 2021 ransomware attack on the meat supplier JBS, attackers threatened to release sensitive stolen data and ultimately extracted an $11 million ransom payment. This dual-threat strategy introduced a layer of fear and urgency, exacerbating the crisis and intensifying pressure on the company to comply with the attackers’ demands. The rapid spread of misinformation and disinformation, which sowed panic among stakeholders and the public, demonstrated how narrative manipulation can amplify the impact of a cyber attack.
A 2022 phishing campaign targeting UK charities employed compelling narratives tailored to each target, leading to an unusually high success rate. Bad actors used bespoke disinformation to enhance their phishing attempts, making them more believable. This campaign demonstrated how carefully crafted false narratives could exploit human vulnerabilities, increasing the likelihood of successful breaches and further illustrating the enhanced impact of combining cyber attacks with strategic disinformation. These examples underscore the critical need for robust defense strategies that address modern cyber threats’ technical and narrative dimensions.
Also: 7 password rules to live by in 2024, according to security experts
“While disinformation may seem like a new phenomenon, there is actually a long history of using misleading propaganda to manipulate adversaries,” Rice said. “The US government itself has engaged in it many times over the years in places like Iran, Guatemala, Congo, South Vietnam, Chile, and others. It was inevitable that these same tactics would eventually be used against the US. In my work helping countries establish the rule of law and build public trust in legal institutions, I’ve seen firsthand how damaging disinformation can be.”
Narrative attack defense strategies for businesses
1. Proactively monitor for emerging threats
The first pillar of any effective defense against AI-enabled disinformation is having eyes and ears on the ground to detect early warning signs. This requires implementing a robust narrative intelligence platform that continuously monitors the public digital landscape, including social media chatter, news outlets, blogs, and forums frequented by threat actors.
Also: Did you get a fake McAfee invoice? How the scam works
The goal is to identify emerging narratives created by misinformation and disinformation in their earliest stages before they can go viral and influence public perception. Speed is critical. The sooner you spot a potential threat, the faster you can investigate it and coordinate an appropriate response.
AI-driven systems can scour hundreds of millions of data points in real time to detect anomalous activity, trending topics, and narrative patterns that could indicate a looming disinformation-enabled cyber attack. Security teams can then quickly pivot to assess the threat’s impact potential.
By leveraging machine learning and natural language processing, these monitoring solutions can surface signals that would be impossible for human analysts to detect amid the noise of the digital sphere. This allows organizations to be proactive rather than reactive in their defense posture.
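To make that concrete, here is a minimal Python sketch of one building block such a monitoring pipeline might include: flagging hours in which brand mentions spike far above the recent baseline. The data feed, function name, and thresholds are hypothetical, chosen for illustration rather than taken from any particular narrative intelligence product.

```python
# Illustrative sketch only: flag hours where brand-mention volume jumps
# well above the rolling baseline. The feed of hourly counts is assumed
# to come from whatever monitoring source an organization already uses.
from statistics import mean, stdev

def detect_spikes(hourly_mention_counts, window=24, threshold=3.0):
    """Return indices of hours whose count exceeds the rolling mean of
    the previous `window` hours by `threshold` standard deviations."""
    alerts = []
    for i in range(window, len(hourly_mention_counts)):
        baseline = hourly_mention_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and hourly_mention_counts[i] > mu + threshold * sigma:
            alerts.append(i)
    return alerts

# Example: a quiet baseline followed by a sudden burst of chatter.
counts = [12, 9, 11, 10, 13, 8, 12, 11, 10, 9, 12, 14,
          11, 10, 9, 13, 12, 10, 11, 9, 12, 10, 11, 13, 160]
print(detect_spikes(counts))  # -> [24], the hour of the burst
```

Commercial platforms layer language models, bot-network detection, and source credibility scoring on top of simple volume signals like this one, but the principle is the same: establish a baseline, then surface deviations fast enough for humans to investigate.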
2. Enhance employee awareness and training
Disinformation often exploits human vulnerabilities, so organizations must prioritize employee awareness and training. Educated employees are the first line of defense against disinformation-enabled cyberattacks.
- Phishing awareness: Generative AI enhances this time-tested hacking tactic by helping attackers craft sophisticated and personalized phishing messages. Employees should be trained to recognize phishing attempts, especially those enhanced by disinformation. This includes understanding common tactics and being cautious about unexpected emails or messages (a few classic warning signs are illustrated in the sketch after this list).
- Building a culture of awareness: It is important to create a culture where cybersecurity and critical thinking are ingrained in daily operations. At Blackbird.AI, we emphasize continuous learning and encourage our employees to stay informed about emerging threats. This proactive approach has proven effective in mitigating the risk of disinformation-enabled cyberattacks.
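To give that training something tangible to point at, here is a minimal Python sketch of the kind of rule-of-thumb checks employees are taught to run in their heads: mismatched reply-to domains, urgency language, and links to bare IP addresses. The indicators and the sample message are hypothetical and deliberately simplistic; this is a teaching aid, not a substitute for real email security tooling.

```python
# Illustrative sketch only: surface common phishing red flags in a message.
import re

URGENCY_PHRASES = ("act now", "immediately", "account suspended", "verify your password")

def phishing_indicators(sender_domain, reply_to_domain, subject, body, links):
    """Return a list of human-readable red flags found in a message."""
    flags = []
    if reply_to_domain and reply_to_domain != sender_domain:
        flags.append(f"Reply-To domain ({reply_to_domain}) differs from sender ({sender_domain})")
    text = f"{subject} {body}".lower()
    for phrase in URGENCY_PHRASES:
        if phrase in text:
            flags.append(f"Urgency language: '{phrase}'")
    for link in links:
        # Raw IP addresses in links are a classic warning sign.
        if re.match(r"https?://\d{1,3}(\.\d{1,3}){3}", link):
            flags.append(f"Link points to a bare IP address: {link}")
    return flags

# Hypothetical example message, not drawn from any real campaign.
flags = phishing_indicators(
    sender_domain="example.com",
    reply_to_domain="examp1e-support.net",
    subject="Account suspended - act now",
    body="Click below to verify your password.",
    links=["http://203.0.113.7/login"],
)
print("\n".join(flags))
```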
“Disinformation attacks people’s perceptions of the system, making them believe it is corrupt, biased, or disconnected from the people,” Rice said. “This erodes the very foundation of a cohesive order. By convincing people to believe in falsehoods, disinformation leads the general public, business leaders, and policymakers to draw incorrect conclusions and make decisions that often go against their best interests while benefiting those behind the disinformation campaigns.”
3. Foster industry-wide collaboration and intelligence sharing
Cybercriminals are notorious for collaborating to execute sophisticated hybrid attacks. The only way for defenders to level the playing field is to cooperate just as effectively by crowdsourcing threat intelligence, jointly analyzing new tactics, and coordinating responses.
Also: The best password managers
Individual businesses can take a few specific actions to bolster their defenses against misinformation and disinformation. First, companies should invest in advanced AI-driven monitoring tools to continuously scan for false information about their brand across digital platforms. These tools can provide real-time alerts, enabling swift responses to emerging threats.
Second, establishing a dedicated task force comprising cybersecurity experts, communication specialists, and legal advisors can ensure a coordinated and practical approach to managing disinformation incidents. This team can develop and implement protocols for rapid response, public communication, and legal action when necessary.
Third, SMBs, startups, and enterprise firms should prioritize employee training programs to educate staff on recognizing and responding to misinformation. By fostering a culture of vigilance and informed skepticism, companies can reduce the risk of internal misinformation spreading.
Also: AI is changing cybersecurity and businesses must wake up to the threat
Lastly, regular audits of digital assets and communication channels can help identify vulnerabilities and reinforce the organization’s resilience against disinformation attacks. By taking these proactive steps, businesses can protect their interests and contribute to a broader, collective effort to combat the pervasive threat of disinformation.
Experts like Rice advocate for transparency and developing cross-sector partnerships. Sharing information and mitigation strategies across sectors can help organizations stay ahead of bad actors. Financial institutions, tech companies, and media organizations can collaborate to share valuable insights and create a comprehensive view of the threat landscape. These partnerships are essential for tracking and countering disinformation that spans multiple industries, ensuring that no sector is left vulnerable.
Engaging with academic institutions is another critical strategy. Universities and research centers are at the forefront of studying disinformation tactics and developing counter-strategies. By partnering with these institutions, businesses can access cutting-edge insights and emerging technologies crucial for preventing disinformation threats. This collaboration can drive innovation and enhance the effectiveness of disinformation countermeasures.
Also: The NSA advises you to turn your phone off and back on once a week – here’s why
Integrating narrative intelligence into your cybersecurity strategy requires a multifaceted approach. It starts with deploying AI-powered tools that can continuously scan the public web, social platforms, forums, and media outlets for emerging narratives and disinformation campaigns related to your organization. These monitoring solutions use advanced natural language processing and machine learning algorithms to detect anomalous activity, trending topics, and sentiment shifts that could signal an impending attack.
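As a simplified illustration of the sentiment-shift piece, the sketch below compares the average sentiment of the most recent posts about a brand against a longer baseline and flags a sharp negative swing. It assumes posts have already been scored in [-1, 1] by some upstream sentiment model; the window sizes, threshold, and sample scores are hypothetical.

```python
# Illustrative sketch only: flag a sharp negative shift in average sentiment.
from statistics import mean

def sentiment_shift(scores, baseline_window=200, recent_window=20, drop_threshold=0.4):
    """Return True if recent average sentiment has dropped sharply
    relative to the longer-term baseline average."""
    if len(scores) < baseline_window + recent_window:
        return False
    baseline = mean(scores[-(baseline_window + recent_window):-recent_window])
    recent = mean(scores[-recent_window:])
    return (baseline - recent) >= drop_threshold

# Example: mostly neutral chatter followed by a burst of negative posts.
history = [0.1] * 200 + [-0.6] * 20
print(sentiment_shift(history))  # -> True
```

In a real deployment, a signal like this would be combined with volume and bot-activity indicators before anything reaches a security or communications analyst.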
AI-powered misinformation and disinformation can topple companies and shatter trust. Narrative defense is no longer optional; it is imperative. Businesses must weave narrative intelligence into the fabric of their cybersecurity strategies, or risk being caught unprepared when the next narrative attack hits.