Anthropic finds alarming 'emerging trends' in Claude misuse report

On Wednesday, Anthropic released a report detailing how Claude was misused during March. It revealed surprising and novel trends in how threat actors and chatbot abuse are evolving, as well as the growing risks that generative AI poses even with proper safety testing.
Security concerns
In one case, Anthropic found that a “sophisticated actor” had used Claude to help scrape leaked credentials “associated with security cameras” to access the devices, the company noted in the announcement.
Also: How a researcher with no malware-coding skills tricked AI into creating Chrome infostealers
In another case, an individual with “limited technical skills” was able to develop malware that would normally require more expertise. Claude helped this individual take an open-source kit from performing only basic functions to more advanced capabilities, like facial recognition and the ability to scan the dark web.
Anthropic’s report suggested this case shows how generative AI can effectively arm less experienced actors who would not be a threat without a tool like Claude.
Also: Anthropic mapped Claude’s morality. Here’s what the chatbot values (and doesn’t)
However, the company couldn’t confirm whether the actors in either case had successfully carried out these attacks.
Social media manipulation
In what Anthropic calls an “influence-as-a-service operation” — and the “most novel case of misuse” it found — actors used Claude to generate content for social media, including images. The operation also directed how and when over a hundred bots on X and Facebook would engage with posts from tens of thousands of human accounts through commenting, liking, and sharing.
“Claude was used as an orchestrator deciding what actions social media bot accounts should take based on politically motivated personas,” the report states, clarifying that whoever was behind the operation was being paid to push their clients’ political agendas. The accounts spanned several countries and languages, indicating a global operation. Anthropic added that this engagement layer was an evolution from earlier influence campaigns.
“These political narratives are consistent with what we expect from state affiliated campaigns,” said the company in its release, though it could not confirm that suspicion.
Also: Project Liberty’s plan to decentralize TikTok could be the blueprint for a better internet
This development is significant because it shows how a user could create a semi-autonomous system with Claude. Anthropic expects this type of misuse to continue as agentic AI systems evolve.
Recruitment fraud
Anthropic also discovered a social engineering recruitment scheme across Eastern Europe that used Claude to make the language of the scam more convincingly professional, or what’s called “language sanitation.” Specifically, these actors had Claude launder their original, non-native English text so that it read as if written by a native speaker, allowing them to better pose as hiring managers.
Protecting against misuse
“Our intelligence program is meant to be a safety net by both finding harms not caught by our standard scaled detection and to add context in how bad actors are using our models maliciously,” Anthropic said about its process. After analyzing conversations to find overall misuse patterns and specific cases, the company banned the accounts behind them.
“These examples were selected because they clearly illustrate emerging trends in how malicious actors are adapting to and leveraging frontier AI models,” Anthropic said in the announcement. “We hope to contribute to a broader understanding of the evolving threat landscape and help the wider AI ecosystem develop more robust safeguards.”
Also: Is that image real or AI? Now Adobe’s got an app for that – here’s how to use it
The report followed news from inside OpenAI that the company had dramatically shortened model testing timelines. Pre- and post-deployment testing for new AI models is essential for mitigating the harm they can cause in the wrong hands. The fact that Anthropic — a company known in the AI space for its commitment to testing and overall caution — still found these misuse cases despite more conservative testing than its competitors is significant.
As federal AI regulation remains unclear under the Trump administration, self-reporting and third-party testing are the only safeguards for monitoring generative AI.