How global threat actors are weaponizing AI now, according to OpenAI

As generative AI has spread in recent years, so too have fears over the technology’s misuse and abuse.
Tools like ChatGPT can produce realistic text, images, video, and speech. The developers behind these systems promise productivity gains for businesses and enhanced human creativity, while many safety experts and policy-makers worry about the impending surge of misinformation, among other dangers, that these systems enable.
Also: What AI pioneer Yoshua Bengio is doing next to make AI safer
OpenAI — arguably the leader in this ongoing AI race — publishes an annual report highlighting the myriad ways in which its AI systems are being used by bad actors. “AI investigations are an evolving discipline,” the company wrote in the latest version of its report, released Thursday. “Every operation we disrupt gives us a better understanding of how threat actors are trying to abuse our models, and enables us to refine our defenses.”
(Disclosure: Ziff Davis, ZDNET’s parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
The new report detailed 10 examples of abuse from the past year, four of which appear to have originated in China.
What the report found
For each of the 10 cases in the new report, OpenAI described how it detected and addressed the problem.
One of the cases with probable Chinese origins, for example, found ChatGPT accounts generating social media posts in English, Chinese, and Urdu. A “main account” would publish a post, then others would follow with comments, all of which were designed to create an illusion of authentic human engagement and attract attention around politically charged topics.
According to the report, those topics — including Taiwan and the dismantling of USAID — are “all closely aligned with China’s geostrategic interests.”
Also: AI bots scraping your data? This free tool gives those pesky crawlers the run-around
Another example of abuse, which according to OpenAI had direct links to China, involved using ChatGPT to engage in nefarious cyber activities, like password "bruteforcing" (trying a huge number of AI-generated passwords in an attempt to break into online accounts) and researching publicly available records on the US military and defense industry.
China’s foreign ministry has denied any involvement with the activities outlined in OpenAI’s report, according to Reuters.
Other threatening uses of AI outlined in the new report were allegedly linked to actors in Russia, Iran, Cambodia, and elsewhere.
Cat and mouse
Text-generating models like ChatGPT are likely just the beginning of the misinformation threat AI poses.
Text-to-video models, like Google's Veo 3, can increasingly generate realistic video from natural language prompts. Text-to-speech models like ElevenLabs' new v3, meanwhile, can generate humanlike voices with similar ease.
Also: Text-to-speech with feeling – this new AI model does everything but shed a tear
Though developers generally implement some kind of guardrails before deploying their models, bad actors — as OpenAI’s new report makes clear — are becoming ever more creative in their misuse and abuse. The two parties are locked in a game of cat and mouse, especially as there are currently no robust federal oversight policies in place in the US.