OpenAI, Anthropic invite US scientists to experiment with frontier models

Partnerships between AI companies and the US government are expanding, even as the future of AI safety and regulation remains unclear.
On Friday, Anthropic, OpenAI, and other AI companies brought 1,000 scientists together to test their latest models. The event, hosted by OpenAI and called an AI Jam Session, gave scientists across nine labs a day to advance their research with several models, including OpenAI's o3-mini and Anthropic's latest release, Claude 3.7 Sonnet.
Also: OpenAI finally unveils GPT-4.5. Here’s what it can do
In its own announcement, Anthropic said the session “offers a more authentic assessment of AI’s potential to manage the complexities and nuances of scientific inquiry, as well as evaluate AI’s ability to solve complex scientific challenges that typically require significant time and resources.”
The AI Jam Session builds on existing agreements between the US government, Anthropic, and OpenAI. Last April, Anthropic partnered with the Department of Energy (DOE) and the National Nuclear Security Administration (NNSA) to red-team Claude 3 Sonnet, testing whether it would reveal dangerous nuclear information. On January 30, OpenAI announced it was partnering with the DOE National Laboratories to "supercharge their scientific research using our latest reasoning models."
The National Labs, a network of 17 scientific research and testing sites spread across the country, investigate topics from nuclear security to climate change solutions.
Participating scientists were also invited to evaluate the models’ responses and give the companies “feedback to improve future AI systems so that they are built with scientists’ needs in mind,” OpenAI said in its announcement for the event. The company noted that it would share findings from the session on how scientists can better leverage AI models.
Also: Everything you need to know about Alexa+, Amazon’s new generative AI assistant
In the announcement, OpenAI included a statement from Secretary of Energy Chris Wright that likened AI development to the Manhattan Project as the country's next "patriotic effort" in science and technology.
OpenAI's broader partnership with the National Labs aims to accelerate and diversify disease treatment and prevention, improve cyber and nuclear security, explore renewable energies, and advance physics research. The AI Jam Session and the National Labs partnership come alongside several other initiatives between private AI firms and the government, including ChatGPT Gov, OpenAI's tailored chatbot for local, state, and federal agencies, and Project Stargate, a $500 billion data center investment plan.
These agreements offer clues as to how US AI strategy is de-emphasizing safety and regulation under the Trump administration. Staff cuts at the AI Safety Institute, part of DOGE's broader firings, have been rumored for weeks but have yet to land, and the head of the institute has already stepped down. The administration's AI Action Plan has yet to be announced, leaving the future of AI oversight in limbo.
Also: The head of US AI safety has stepped down. What now?
Partnerships like these, which put the latest developments in AI directly in the hands of government initiatives, could become more common as the Trump administration works more closely with AI companies and deprioritizes third-party watchdog involvement. With US regulation still nascent, the risk is even less oversight of how powerful, and how safe, new models are as deployment quickens.