Anthropic's new AI models for classified info are already in use by US gov

Roughly five weeks before President Donald Trump announces his administration’s AI policy, major AI companies are continuing to strengthen their ties to government — specifically for national security customers.
On Thursday, Anthropic announced Claude Gov, a family of models built "exclusively for US national security customers," according to the company's release. Claude Gov is intended for everything from operations to intelligence and threat analysis, and is designed to interpret classified documents and defense contexts. It also offers improved proficiency in languages and dialects, as well as enhanced interpretation of cybersecurity data.
Also: Anthropic’s popular Claude Code AI tool now included in its $20/month Pro plan
“The models are already deployed by agencies at the highest level of US national security, and access to these models is limited to those who operate in such classified environments,” the company said.
Developed with feedback from its government users, Anthropic said Claude Gov has undergone the same safety testing the company applies to all Claude models. In the release, Anthropic reiterated its commitment to safe, responsible AI, assuring that Claude Gov is no exception.
The announcement comes several months after OpenAI released ChatGPT Gov in January of this year, and signals a broader shift among major AI labs toward explicitly serving government use cases with fine-tuned products. The National Guard already uses Google AI to improve its disaster response.
(Disclosure: Ziff Davis, ZDNET’s parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Before the Trump administration, military and other defense-related contracts between AI companies and the US government weren't nearly as publicized, especially amid shifting usage guidelines at companies like OpenAI, which had originally vowed not to engage in weapons development. Google has recently made similar guideline revisions while continuing to claim a commitment to responsible AI.
Also: Anthropic mapped Claude’s morality. Here’s what the chatbot values (and doesn’t)
The deepening ties between the government and AI companies come in the larger context of the Trump administration's impending AI Action Plan, slated for July 19. Since Trump took office, AI companies have walked back Biden-era responsibility commitments, originally made with the US AI Safety Institute, as the administration rolled back the regulations Biden put in place. OpenAI has been advocating for lighter regulation in exchange for giving the government access to its models. Alongside Anthropic, it has also cozied up to the government in new ways, including through scientific partnerships and the $500 billion Stargate initiative, all while the Trump administration cuts staff and funding for AI and other science initiatives within the US government.