OpenAI, Microsoft, Google and Anthropic Form Body to Regulate AI
You may have heard Sam Altman, the man behind ChatGPT, call for the regulation of future AI models even as his company, OpenAI, lobbied the EU to water down its AI Act.
OpenAI and generative AI pioneers Google, Microsoft and Anthropic are now taking Altman’s pledge a step further, launching the Frontier Model Forum.
Announced on July 27, 2023, the Forum will be an industry body designed to ensure the “safe and responsible development” of so-called “frontier AI” models.
The term “frontier AI” was coined by OpenAI, which described it in a July 6 white paper as “highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety.”
In their joint statement, the four founding members of the Frontier Model Forum made it clear that frontier AI models refer to “large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models, and can perform a wide variety of tasks.”
The Forum will thus focus solely on future models.
Anna Makanju, VP of global affairs at OpenAI, explained the decision to focus on future AI models: “Advanced AI technologies have the potential to profoundly benefit society, and the ability to achieve this potential requires oversight and governance. It is vital that AI companies – especially those working on the most powerful models – align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible. This is urgent work and this forum is well-positioned to act quickly to advance the state of AI safety.”
Benchmarks and Best Practices as a Primary Focus
The objectives for the Forum include:
- Advancing AI safety research to promote responsible development of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety
- Identifying best practices for the responsible development and deployment of frontier models, helping the public understand the nature, capabilities, limitations, and impact of the technology
- Collaborating with policymakers, academics, civil society, and companies to share knowledge about trust and safety risks
- Supporting efforts to develop applications that can help meet society’s greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats
During the Forum’s first year, its members will focus on the first three key areas listed above. Their first tasks will include “advancing technical evaluations and benchmarks, and developing a public library of solutions to support industry best practices and standards.”
The founding members will establish an advisory board “over the coming months” to help guide the Forum’s strategy and priorities.
In the joint statement, Brad Smith, vice chair and president of Microsoft, emphasized the responsibility of AI developers: “Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control. This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”
Open to Other Members
The Forum membership is open to “other organizations developing and deploying frontier AI models as defined by the Forum” that meet two criteria:
- They demonstrate strong commitment to frontier model safety, including through technical and institutional approaches
- They are willing to contribute to advancing the Frontier Model Forum’s efforts, including by participating in joint initiatives and supporting the development and functioning of the initiative
Kent Walker, Google’s president of global affairs, said: “We’re excited to work together with other leading companies, sharing technical expertise to promote responsible AI innovation. Engagement by companies, governments, and civil society will be essential to fulfill the promise of AI to benefit everyone.”
Dario Amodei, CEO of Anthropic, used similar language: “We are excited to collaborate with industry, civil society, government, and academia to promote safe and responsible development of the technology. The Frontier Model Forum will play a vital role in coordinating best practices and sharing research on frontier AI safety.”
Is Self-Regulation a Diversion from Strict Regulation?
The announcement came six days after the White House secured a voluntary commitment to AI safety from the four members of the Frontier Model Forum as well as Amazon, Inflection AI and Meta.
While they promised to disclose when content is AI-generated and to allow independent audits of their models, some analysts criticized the fact that they did not commit to transparency about their models’ training.
Encode Justice, an NGO promoting a “human-centered artificial intelligence,” raised concerns about these self-regulatory initiatives and insisted they shouldn’t take the conversation away from stricter, independent regulation. “While a promising follow-up to last week’s commitments, Big Tech companies’ announcement today of the Frontier Model Forum means little without concrete steps and new norms for AI safety. Self-regulation is no substitute for government action,” the NGO said on Twitter.
Andrew Strait, associate director of the UK-based Ada Lovelace Institute, shared the NGO’s concern. On Twitter, he dismissed the term ‘frontier model,’ calling it “an undefinable moving-target term that excludes the existing models from governance, regulation, and attention.”
Kris Shrishak, a senior fellow at the Irish Council for Civil Liberties (ICCL), also criticized the initiative: “A Forum of companies that have failed in responsible development of AI systems of ‘non-frontier’ models will now be responsible for ‘frontier models’? It seems to me that these companies don’t fulfill the membership criteria that they have formulated.”
According to Time magazine, over the past few months OpenAI repeatedly argued to European officials that the forthcoming AI Act should not consider its general-purpose AI systems, including GPT-3, GPT-3.5 and GPT-4, to be “high risk,” a designation that would have subjected them to strict obligations under the new regulation.