AI Safety Summit: What to expect as global leaders eye AI regulation
What to do about those risks, both existential and everyday, is less clear.
The UK government’s first suggestion is “responsible capability scaling” — asking industry to set its own risk thresholds, assess the threats its models pose, choose less risky development paths, and specify in advance what it will do if something goes wrong.
At a national level, the UK government is suggesting it and other countries monitor what enterprises are up to, and perhaps require enterprises to obtain a license for some AI activities.
As for international collaboration and regulation, more research is needed, the UK government says. It’s inviting other countries to discuss how they can work together to identify the most urgent areas for research, and where promising ideas are already emerging.
Who is attending the AI Safety Summit?
When the UK government first announced the summit, its intention was to include “country leaders” from the world’s largest economies, alongside academics and representatives of tech companies leading AI development, with a view to set a new global regulatory agenda.
A week or two before the summit, though, reports emerged that the leaders of several countries with strong AI industries were unlikely to attend, raising doubts about how effective it will be.
French President Emmanuel Macron will not be there, and German Chancellor Olaf Scholz is unlikely to show up either, European political news site Politico.eu reported. US President Joe Biden will not attend either, although Vice President Kamala Harris may.
While some of the European Union’s biggest member states are disengaging from the summit, the bloc as a whole will be well-represented. European Commission President Ursula von der Leyen will be there and, according to her official engagement calendar, she plans to meet Secretary-General of the United Nations António Guterres at the event.
Meanwhile, European Commission Vice-President Věra Jourová’s calendar indicates she’ll meet South Korean Minister of Science and ICT Lee Jong-ho there.
Google DeepMind CEO Demis Hassabis is expected to be among the 100 or so attendees — a safe bet since the company was founded in London and maintains its headquarters there.
The UK government has been playing up the recent decisions of a number of other AI companies to open offices in London, including ChatGPT developer OpenAI and Anthropic, whose CEO Dario Amodei is reportedly also attending. Palantir Technologies, too, has announced plans to move its European headquarters to the UK, and is said to be sending a representative to the event. A Microsoft representative will also reportedly attend, although not its CEO.
Where else are AI directions being set?
The UK’s AI Safety Summit is far from the only place that governments and enterprises are attempting to influence AI policy and development.
One of the first big enterprise commitments to ethical AI was the Rome Call. In 2020, Microsoft and IBM signed on to the non-denominational Vatican initiative, which promotes six principles of AI development: transparency, inclusion, responsibility, impartiality, reliability, and security/privacy.
Since then, legislative, regulatory, industry, and civil society initiatives have multiplied. The European Union’s all-encompassing Artificial Intelligence Act seemed ahead of its time and full of good intention, but has drawn criticism and calls for stronger action from civil society groups, including Statewatch and service workers’ union Uni Europa.
Also, the White House has secured voluntary commitments to AI safety standards from seven of the largest AI developers, the Cyberspace Administration of China has issued regulations on generative AI training, and New York City has set rules on the use of AI in hiring.
Even the United Nations Security Council has been debating the issue.
Software developers are joining in, too. The Frontier Model Forum is the industry’s attempt to get ahead of state or international controls by demonstrating its members — including Microsoft, Google, Anthropic, and OpenAI — can be good global citizens through self-regulation.
All this activity puts the UK AI Safety Summit in a highly competitive environment. Legislators must balance two goals: creating a safe environment for their citizens, free from the menace of opaque automated discrimination or even, if the most alarmist critics are to be believed, global extinction, while still allowing businesses to innovate and benefit from the productivity gains that AI may enable.
Who gets to set those regulations, and who will have to abide by them, is unlikely to be decided any time soon, much less this week.