Big tech companies commit to new safety practices for AI

Comparison to other safety commitments

The agreement follows a landmark accord among the EU, the US, China, and other countries to work together on AI safety. That pact, the so-called Bletchley Declaration, made at the AI Safety Summit in the UK in November 2023, established a shared understanding of the opportunities and risks posed by frontier AI and recognized the need for governments to work together to meet the most significant challenges associated with the technology.

One difference between the Frontier AI Safety Commitments and the Bletchley Declaration is obvious: the new agreement operates at the organizational level, while the Bletchley Declaration was made by governments, which gives the declaration greater potential to shape future regulation of AI.

The Frontier commitments also enable “organizations to determine their own thresholds for risk,” which may not be as effective as setting those thresholds at a higher level, as the EU AI Act, another attempt to regulate AI safety, does, noted Maria Koskinen, AI policy manager at AI governance technology vendor Saidot.

“The EU AI Act regulates risk management of general-purpose AI models with systemic risks, [which]…are unique to these high-impact general-purpose models,” she said.

So where the Frontier AI Safety Commitments leave it to organizations to define their own thresholds, the EU AI Act “provides guidance on this with the introduction of the definition of ‘systemic risk,’” Koskinen noted.

“This gives more certainty not only to organizations implementing these commitments but also to those adopting AI solutions and individuals being impacted by these models,” she said.


