Securing Election Integrity In 2024: Navigating the Complex Landscape of Modern Threats
As we navigate the 2024 election year, safeguarding the integrity of our democratic process is more critical than ever. While much attention has been focused on securing ballot machines, the real threats extend far beyond the physical infrastructure. Misinformation, cyberattacks, and the rise of generative AI technologies like deepfakes present significant challenges.
Between June 18 and July 12, the Trustwave SpiderLabs team received and analyzed more than 5,000 emails containing political subject matter, collected from secure email gateway cloud submissions and spam traps. The sample covered both major political parties, Democrats and Republicans, with tones ranging from supportive to scathing. Topics included the introduction and promotion of candidates, campaign updates, derogatory remarks about the opposition, and conspiracy theories.
Despite the differing points of view of these email senders, two things are constant across these messages: a call for monetary donations and the use of propaganda techniques. Clearly, threat actors will leverage any and all vectors to sway public opinion toward or away from their desired target. Understanding these tactics, mitigating the associated risks, and implementing proactive measures are essential for voters, campaign workers, and media professionals alike.
Misinformation: The Invisible Enemy
Misinformation has become a pervasive threat in our digital age. With the organic nature of social media, biased algorithms, and the rapid spread of fake news, misinformation can easily influence public opinion. Social media platforms, despite their efforts to combat false information, remain a primary vehicle for the spread of misleading content. As we head into the heart of the election season, the potential for misinformation to shape voter perceptions and decisions is at an all-time high.
Key issues like healthcare, the economy, and education are particularly vulnerable to manipulation. Misleading narratives can be crafted to exploit voter fears and biases, swaying public opinion and potentially altering the outcome of the election. Imagine, for example, scrolling past a headline or video of a presidential candidate announcing plans to end a widely endorsed healthcare policy. Even if the claim contradicts the candidate's actual platform, it may still be plausible enough that unsuspecting viewers never give it a second thought. Without proper verification, such a clip can spread rapidly across myriad platforms, and its simultaneous appearance on Facebook, Instagram, and X only lends it a false air of legitimacy, whether or not it is genuine.
It is more crucial than ever for voters to critically evaluate the information they encounter and to rely on reputable sources for their news. Cross-checking claims against multiple credible sources, and vetting anything seen on social media against reporting from legitimate media outlets, can greatly reduce the spread of false information.
The Digital Battlefield
In addition to misinformation, cyberattacks pose a significant threat to election security. The introduction of generative AI has only inflamed this threat.
State-sponsored actors and independent hackers alike have demonstrated their ability to disrupt electoral processes through various means. From hacking voter databases to launching denial-of-service (DoS) attacks on critical infrastructure, the tactics used in cyber warfare are diverse and constantly evolving.
Recent years have seen a rise in ransomware attacks targeting local government systems, including those responsible for managing elections. These attacks can lead to the theft of sensitive voter information, disruptions in the voting process, and a general erosion of public trust in the electoral system. Not only can these attacks spread fake news, but they also enable blackmail and can be leveraged in advanced phishing campaigns. For example, a campaign email could urge voters to click a malicious link ostensibly pointing to a candidate's recent speech. Especially if the threat actor is leveraging AI, that link or any accompanying image could easily be realistic enough for an ordinary citizen to click, exposing them to malware.
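To illustrate the kind of basic link vetting that can catch lures like the one above, the sketch below applies a few simple heuristics to a URL from a campaign email: raw-IP hosts, non-HTTP schemes, and lookalike domains one or two characters away from a trusted name. The allow-list and example domains here are purely illustrative assumptions, not any real campaign's infrastructure, and a toy filter like this is no substitute for a real secure email gateway.

```python
import re
from urllib.parse import urlparse

# Hypothetical allow-list of domains a campaign actually uses (illustrative only).
TRUSTED_DOMAINS = {"example-campaign.org", "secure.example-campaign.org"}


def _edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via simple dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]


def is_suspicious_link(url: str) -> bool:
    """Flag links using raw IP hosts, odd schemes, or lookalike domains."""
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()

    # Anything that is not plain http(s) is treated as suspicious.
    if parsed.scheme not in ("http", "https"):
        return True

    # Raw IP addresses in place of a domain are a classic phishing tell.
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        return True

    # Exact match against the allow-list passes.
    if host in TRUSTED_DOMAINS:
        return False

    # Lookalike check: a near-miss of a trusted domain (e.g. digit-for-letter swaps).
    for trusted in TRUSTED_DOMAINS:
        if _edit_distance(host, trusted) <= 2:
            return True

    # Unknown domains are flagged for human review in this conservative sketch.
    return True
```

Here a link such as `https://examp1e-campaign.org/video` (with a digit `1` for the letter `l`) would be flagged, while the allow-listed donation page passes.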
Mitigating phishing or malware threats may sometimes fall to the individual on the receiving end, but strengthening cybersecurity measures at all levels of government is also essential to mitigating these risks, particularly those born of the proliferation of AI. To combat the misuse of AI and the threat of automated cyberattacks, several nations are developing or rolling out protective legislation. In the US, the Federal Artificial Intelligence Risk Management Act of 2023 directs federal agencies to follow guidelines for managing AI-related risks. States like California and New York are also enacting laws to regulate AI systems and ensure ethical conduct.
Deepfakes and the New Frontier of Deception
Among the many threats to election security, deepfakes represent a particularly concerning development. These AI-generated videos can depict individuals saying or doing things they never did, creating highly realistic but entirely false narratives. As technology advances, deepfakes become increasingly difficult to detect, posing a significant challenge for both the public and media professionals.
The ease of creating deepfakes has lowered the barriers for malicious actors. Freely available apps and user-friendly software mean that virtually anyone can generate a convincing deepfake. This democratization of technology makes widespread misinformation more plausible than ever before. Malicious actors can produce and disseminate deepfakes quickly and in large volumes, flooding social media with fake content designed to influence voter decisions on key issues.
Deepfakes can even be tailored to exploit the fears and biases of specific demographic groups, potentially swaying public opinion against a candidate. Because deepfakes are so difficult to spot and often play on voters’ deepest fears, it’s essential for everyone to stay vigilant. The news media plays a crucial role in verifying information, and campaign organizations can also create awareness by urging the public and tech companies to review and filter unverified videos.
The average person must also bear a certain amount of responsibility for vetting campaign ads, videos, and other media they encounter. Similar to how, in traditional cybersecurity, everyone is responsible for identifying phishing scams, it is just as necessary that every voter question the authenticity of the photo and video media they see.
Detection and Prevention
Despite the sophisticated nature of these threats, there are measures that can be taken to combat them. For misinformation and fake news, media literacy campaigns and public awareness initiatives are crucial. Voters need to be educated on how to identify false information and encouraged to verify the credibility of their news sources. Social media platforms must also continue to improve their algorithms to detect and remove misleading content more effectively.
In the realm of cybersecurity, government agencies and private organizations must collaborate to enhance the security of election infrastructure. Regular security audits, robust encryption methods, and comprehensive incident response plans are vital components of a resilient electoral system. Additionally, investing in advanced threat detection technologies can help identify and mitigate cyber threats before they cause significant damage.
When it comes to deepfakes, the development of sophisticated detection tools is paramount. AI-driven solutions can analyze videos for signs of manipulation, such as inconsistencies in lighting, shadows, and facial movements. Public awareness campaigns should also inform voters that deepfakes exist and offer guidance on how to recognize them. Practical machine-learning-based detection tools are already available, including Intel's FakeCatcher, Microsoft's Video Authenticator, and Deepware.
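The products named above are proprietary, but the underlying idea of frame-consistency analysis can be sketched in miniature. The toy example below scores a clip (nested lists of grayscale pixel values standing in for video frames) by the largest jump in average brightness between consecutive frames; real detectors model far subtler cues such as facial blood flow or blink patterns, and the threshold here is an arbitrary illustrative assumption.

```python
def mean_brightness(frame):
    """Average pixel value of a 2-D grayscale frame (nested lists, 0-255)."""
    pixels = [p for row in frame for p in row]
    return sum(pixels) / len(pixels)


def inconsistency_score(frames):
    """Largest jump in mean brightness between consecutive frames."""
    means = [mean_brightness(f) for f in frames]
    return max(abs(b - a) for a, b in zip(means, means[1:]))


def looks_manipulated(frames, threshold=40.0):
    """Flag clips whose inter-frame brightness jump exceeds an arbitrary threshold.

    A toy consistency check only; production detectors learn much richer signals.
    """
    return inconsistency_score(frames) > threshold
```

A smoothly lit clip yields a small score and passes, while a clip with an abrupt lighting discontinuity, one crude artifact of spliced or generated footage, trips the threshold.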
This Election Year
As we move through the 2024 election year, the integrity of our democratic process is under unprecedented threat. Security leaders should continue to advocate for and support legislation that regulates the use of AI and imposes penalties for the creation and distribution of malicious deepfakes and misinformation. Encouraging international cooperation on AI regulation and on targeted, politicized cyber threats can also help create a unified approach, and general rules of thumb, for shoring up election security.
It is imperative that voters, campaign workers, and media professionals remain vigilant and informed about these threats. By doing so, we can collectively work towards a more secure and transparent electoral process, ensuring that the voice of the people is accurately represented in the outcome of the 2024 election.
About the Author
Karl Sigler is a Security Research Manager at Trustwave SpiderLabs where he is responsible for research and analysis of current vulnerabilities, malware and threat trends. Karl and his team run the Trustwave SpiderLabs Threat Intelligence database, maintaining security feeds from internal research departments and third-party threat exchange programs. His team also serves as liaison for the Microsoft MAPP program, coordinates Trustwave SpiderLabs responsible vulnerability disclosure process and maintains the IDS/IPS signature set for their MSS customers. With more than 20 years’ experience working in information security, Karl has presented on topics like Intrusion Analysis, Pen Testing and Computer Forensics to audiences in over 30 countries. Karl can be reached online at https://www.linkedin.com/in/ksigler/ and at our company website www.trustwave.com.