Claude Chatbot Used for Automated Political Messaging

In a new report, Anthropic has detailed a politically motivated influence campaign and a series of AI-powered cybercrime cases.
The company found that threat actors used its Claude chatbot to automate political messaging, manage fake social media personas, and support other malicious activities.
What’s new is that Claude was not only used to generate content but also to decide how and when fake accounts should engage with real users. This included commenting, liking and sharing posts based on specific political objectives.
Anthropic said more than 100 AI-driven personas were created to interact with tens of thousands of authentic accounts across Facebook and X.
“The operation engaged with tens of thousands of authentic social media accounts,” the company said.
“No content achieved viral status. However, the actor strategically focused on sustained long-term engagement promoting moderate political perspectives rather than pursuing virality.”
The campaign pushed narratives that were favorable to countries including the UAE, Iran, Kenya and several European nations.
The campaign was built on a programmatic framework that enforced consistent behavior across accounts, making the bots appear more human and harder to detect.
In addition to political influence, Anthropic reported misuse of Claude in other areas:
- A credential-stuffing scheme that targeted internet-connected security cameras
- A recruitment scam aimed at job seekers in Eastern Europe
- A low-skill actor who used Claude to build advanced malware, including dark web scanning tools and persistent-access systems
Anthropic has since banned the accounts involved but warned that such abuse reflects a broader trend.
As generative AI lowers the barrier to entry, more actors – state-linked or otherwise – can launch sophisticated digital operations with minimal resources.
The company called for stronger safeguards and industry collaboration to prevent future misuse of frontier AI models.
Image credit: Koshiro K / Shutterstock.com