Biden-Harris Administration Secures AI Commitments For Safety


The Biden-Harris Administration has taken a new step towards ensuring the responsible development of artificial intelligence (AI) technology by securing voluntary commitments from leading AI companies. 

As part of the new initiative, Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI have pledged to prioritize safety, security and trust in their AI systems.

To protect Americans’ rights and safety, the companies have committed to several fundamental principles. They will conduct internal and external security testing of AI systems before release, with the help of independent experts, to guard against biosecurity, cybersecurity and broader societal risks.

Additionally, these companies will share information on managing AI risks with industry, governments, civil society and academia, fostering collaboration and best practices. They will invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights, releasing the weights only when intended and only once security risks have been considered.

To earn the public’s trust, the companies will develop technical mechanisms, such as watermarking, to indicate when content is AI-generated, reducing the risk of fraud and deception. They will also publicly report on their AI systems’ capabilities, limitations, and appropriate and inappropriate uses, covering both security and societal risks, including fairness and bias.

Read more on AI and security: Google Launches Framework to Secure Generative AI

“With thoughtful regulation, ethical development, collaboration, and advanced security practices, it is possible to harness the potential of AI for defensive cybersecurity purposes while minimizing its exploitation for malicious activities,” said Dave Randleman, field CISO, application security & ethical hacking at Coalfire.

“Effective regulation can empower cybersecurity professionals to use AI as a powerful tool in defending against cyber-threats and ensuring a safer digital environment.”

The Biden-Harris Administration said it is committed to ensuring America’s leadership in responsible AI innovation. Alongside developing an executive order, the administration plans to pursue bipartisan legislation to govern AI development safely. 

The administration is also engaging with international allies and partners to establish a robust global framework for AI governance.

“From a cybersecurity perspective, the push for guidelines might be effective for commercial interests but will do nothing to stop threat actors from using the technology to their advantage,” commented Mike Parkin, senior technical engineer at Vulcan Cyber.

“A voluntary system can work within the scope of ‘common usage,’ but it will only work for organizations that operate legitimately.”

The new commitments come just days after the Biden-Harris Administration announced the launch of the “US Cyber Trust Mark” program on July 18.