Google releases responsible AI report while removing its anti-weapons pledge
The most notable part of Google’s latest responsible AI report could be what it doesn’t mention. (Spoiler: No word on weapons and surveillance.)
On Tuesday, Google released its sixth annual Responsible AI Progress Report, which details “methods for governing, mapping, measuring, and managing AI risks,” in addition to “updates on how we’re operationalizing responsible AI innovation across Google.”
Also: DeepSeek’s AI model proves easy to jailbreak – and worse
In the report, Google points to the many safety research papers it published in 2024 (more than 300), its spending on AI education and training ($120 million), and various governance benchmarks, including a “mature” readiness rating for its Cloud AI under the National Institute of Standards and Technology (NIST) Risk Management Framework.
The report focuses largely on security- and content-focused red teaming, diving deeper into projects like Gemini, AlphaFold, and Gemma, and how the company safeguards models from generating or surfacing harmful content. It also touts provenance tools like SynthID, a content-watermarking tool Google has open-sourced, designed to make AI-generated misinformation easier to track, as part of this responsibility narrative.
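For a sense of what that open-sourced watermarking looks like in practice, here is a minimal sketch using SynthID Text through its Hugging Face transformers integration (assuming transformers 4.46 or later and an open-weights model such as google/gemma-2-2b-it); the key values and prompt below are illustrative placeholders, not Google's production settings.

```python
# Minimal sketch: generating watermarked text with SynthID Text via the
# Hugging Face transformers integration (assumes transformers >= 4.46).
from transformers import AutoModelForCausalLM, AutoTokenizer, SynthIDTextWatermarkingConfig

model_id = "google/gemma-2-2b-it"  # illustrative open-weights model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Watermarking keys are arbitrary integers the deployer keeps private;
# these values are placeholders for illustration only.
watermarking_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57],
    ngram_len=5,  # length of the token n-grams the watermark is seeded on
)

inputs = tokenizer(["Write a short note about AI provenance."], return_tensors="pt")
outputs = model.generate(
    **inputs,
    watermarking_config=watermarking_config,  # biases sampling to embed the watermark
    do_sample=True,
    max_new_tokens=100,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```

Detecting the watermark later requires a separate Bayesian detector trained with the same keys, which ships with the open-sourced SynthID Text tooling and is not shown here.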
Google also updated its Frontier Safety Framework, adding new security recommendations, misuse mitigation procedures, and “deceptive alignment risk,” which addresses “the risk of an autonomous system deliberately undermining human control.” Alignment faking, or the process of an AI system deceiving its creators to maintain autonomy, has recently been noted in models like OpenAI o1 and Claude 3 Opus.
Also: Anthropic’s Claude 3 Opus disobeyed its creators – but not for the reasons you’re thinking
Overall, the report sticks to end-user safety, data privacy, and security, remaining within that somewhat walled garden of consumer AI. While it contains scattered mentions of protecting against misuse, cyberattacks, and the risks of building artificial general intelligence (AGI), those mentions also stay largely within that ecosystem.
That’s notable given that, at the same time, the company removed from its website its pledge not to use AI to build weapons or surveil citizens, as Bloomberg reported. The section titled “applications we will not pursue,” which Bloomberg says was still visible as of last week, appears to have been taken down.
That disconnect, between the report’s consumer focus and the removal of the weapons and surveillance pledge, highlights the perennial question: What is responsible AI?
As part of the report announcement, Google said it had renewed its AI principles around “three core tenets” — bold innovation, collaborative progress, and responsible development and deployment. The updated AI principles refer to responsible deployment as aligning with “user goals, social responsibility, and widely accepted principles of international law and human rights” — which seems vague enough to permit reevaluating weapons use cases without appearing to contradict its own guidance.
Also: Why Mark Zuckerberg wants to redefine open source so badly
“We will continue to focus on AI research and applications that align with our mission, our scientific focus, and our areas of expertise,” the blog notes, “always evaluating specific work by carefully assessing whether the benefits substantially outweigh potential risks.”
The change adds a tile to the slowly growing mosaic of tech giants shifting their attitudes toward military applications of AI. Last week, OpenAI moved further into national security infrastructure through a partnership with US National Laboratories, having partnered with defense contractor Anduril late last year. In April 2024, Microsoft pitched DALL-E to the Department of Defense, though OpenAI maintained a no-weapons-development stance at the time.