UK AI Safety Summit: Global Powers Make ‘Landmark’ Pledge to AI Safety
Global leaders from 28 nations have gathered in the U.K. for an influential summit dedicated to AI regulation and safety. Here’s what you need to know.
Representatives from 28 countries and tech companies convened on the historic site of Bletchley Park in the U.K. for a landmark two-day summit held Nov. 1-2, 2023, focusing on the safety and regulation of artificial intelligence. Day one of the AI Safety Summit culminated in the signing of the “landmark” Bletchley Declaration on AI Safety, which commits 28 participating countries — including the U.K., U.S. and China — to jointly manage and mitigate risks from artificial intelligence while ensuring safe and responsible development and deployment.
What is the Bletchley Declaration on AI Safety?
The Bletchley Declaration states that developers of advanced and potentially dangerous AI technologies shoulder a significant responsibility for ensuring their systems are safe through rigorous testing protocols and safety measures to prevent misuse and accidents.
It also emphasizes the need for common ground in understanding AI risks and fostering international research partnerships in AI safety while recognizing that there is “potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models.”
U.K. Prime Minister Rishi Sunak called the signing of the declaration “a landmark achievement that sees the world’s greatest AI powers agree on the urgency behind understanding the risks of AI.”
In a written statement, Sunak said: “Under the UK’s leadership, more than twenty five countries at the AI Safety Summit have stated a shared responsibility to address AI risks and take forward vital international collaboration on frontier AI safety and research.
“The UK is once again leading the world at the forefront of this new technological frontier by kickstarting this conversation, which will see us work together to make AI safe and realize all its benefits for generations to come.”
What is the AI Safety Summit?
The AI Safety Summit was a major conference held Nov. 1-2, 2023 in Buckinghamshire, U.K. It brought together international governments, technology companies and academia to consider the risks of AI “at the frontier of development” and to discuss how these risks could be mitigated through a united, global effort.
The inaugural day of the AI Safety Summit saw a series of talks from business leaders and academics aimed at promoting a deeper understanding of what the U.K. government has dubbed “frontier AI” — advanced artificial intelligence systems that could pose as-yet unknown risks to society.
This included a number of roundtable discussions with “key developers,” including OpenAI, Anthropic and U.K.-based Google DeepMind, that centered on how risk thresholds, effective safety assessments and robust governance and accountability mechanisms can be defined.
SEE: ChatGPT Cheat Sheet: Complete Guide for 2023 (TechRepublic)
The first day of the summit also featured a virtual address by King Charles III, who labeled AI one of humanity’s “greatest technological leaps” and highlighted the technology’s potential to transform healthcare and various other aspects of life. The British monarch called for robust international coordination and collaboration to ensure AI remains a secure and beneficial technology.
Day two of the summit featured a press conference and closing remarks from Prime Minister Sunak.
Who attended the AI Safety Summit?
Representatives from the Alan Turing Institute, Stanford University, the Organisation for Economic Co-operation and Development and the Ada Lovelace Institute were among the attendees at the AI Safety Summit, alongside tech companies including Google, Microsoft, IBM, Meta and AWS, as well as leaders such as SpaceX boss Elon Musk. Also in attendance was U.S. Vice President Kamala Harris.
What are experts’ reactions to the AI Safety Summit?
Poppy Gustafsson, chief executive officer of AI cybersecurity company Darktrace, told PA Media she had been concerned that discussions would focus too much on “hypothetical risks of the future” — like killer robots — but that the discussions were more “measured” in reality.
Rajesh Ganesan, president of Zoho-owned ManageEngine, commented in an email statement that, “While some may be disappointed if the summit falls short of establishing a global regulatory body,” the fact that global leaders were discussing AI regulation was a positive step forward.
“Gaining international agreement on the mechanisms for managing the risks posed by AI is a significant milestone — greater collaboration will be paramount to balancing the benefits of AI and limiting its damaging capacity,” Ganesan said in a statement.
“It’s clear that regulation and security practices will remain critical to the safe adoption of AI and must keep pace with its rapid advancements. This is something that the EU’s AI Act and the G7 Code of Conduct agreements could drive and provide a framework for.”
Ganesan added: “We need to prioritize ongoing education and give people the skills to use generative AI systems securely and safely. Failing to make AI adoption about the people who use and benefit from it risks dangerous and suboptimal outcomes.”
Why is AI safety important?
There is currently no comprehensive set of regulations governing the use of artificial intelligence, though the European Union has drafted a framework that aims to establish rules for the technology across the 27-nation bloc.
The potential misuse of AI, whether through malicious intent or human or machine error, remains a key concern. The summit heard that cybersecurity vulnerabilities, biotechnological dangers and the spread of disinformation represented some of the most significant threats posed by AI, while issues with algorithmic bias and data privacy were also highlighted.
U.K. Technology Secretary Michelle Donelan emphasized the importance of the Bletchley Declaration as a first step in ensuring the safe development of AI. She also stated that international cooperation was essential to building public trust in AI technologies, adding that “no single country can face down the challenges and risks posed by AI alone.”
She noted: “Today’s landmark Declaration marks the start of a new global effort to build public trust by ensuring the technology’s safe development.”
How has the U.K. invested in AI?
On the eve of the U.K. AI Safety Summit, the U.K. government announced £118 million ($143 million) in funding to boost AI skills in the United Kingdom. The funding will target research centers, scholarships and visa schemes, and aims to encourage young people to study AI and data science.
Meanwhile, £21 million ($25.5 million) has been earmarked for equipping the U.K.’s National Health Service with AI-powered diagnostic and imaging technology, such as X-rays and CT scans.