G7 Countries Establish Voluntary AI Code of Conduct
The code of conduct provides guidelines for AI regulation across G7 countries and includes cybersecurity considerations and international standards.
The Group of Seven countries have created a voluntary AI code of conduct, released on October 30, for the use of advanced artificial intelligence. The code of conduct focuses on, but is not limited to, foundation models and generative AI.
As a point of reference, the G7 countries are the U.K., Canada, France, Germany, Italy, Japan and the U.S., as well as the European Union.
What is the G7’s AI code of conduct?
The G7’s AI code of conduct, more specifically called the “Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems,” is a risk-based approach that intends “to promote safe, secure and trustworthy AI worldwide and will provide voluntary guidance for actions by organizations developing the most advanced AI systems.”
The code of conduct is part of the Hiroshima AI Process, which is a series of analyses, guidelines and principles for project-based cooperation across G7 countries.
What does the G7 AI code of conduct say?
The 11 guiding principles of the G7’s AI code of conduct, quoted directly from the report, are:
- Take appropriate measures throughout the development of advanced AI systems, including prior to and throughout their deployment and placement on the market, to identify, evaluate and mitigate risks across the AI lifecycle.
- Identify and mitigate vulnerabilities, and, where appropriate, incidents and patterns of misuse, after deployment including placement on the market.
- Publicly report advanced AI systems’ capabilities, limitations and domains of appropriate and inappropriate use, to support ensuring sufficient transparency, thereby contributing to increase accountability.
- Work towards responsible information sharing and reporting of incidents among organizations developing advanced AI systems including with industry, governments, civil society and academia.
- Develop, implement and disclose AI governance and risk management policies, grounded in a risk-based approach – including privacy policies and mitigation measures.
- Invest in and implement robust security controls, including physical security, cybersecurity and insider threat safeguards across the AI lifecycle.
- Develop and deploy reliable content authentication and provenance mechanisms, where technically feasible, such as watermarking or other techniques to enable users to identify AI-generated content.
- Prioritize research to mitigate societal, safety and security risks and prioritize investment in effective mitigation measures.
- Prioritize the development of advanced AI systems to address the world’s greatest challenges, notably but not limited to the climate crisis, global health and education.
- Advance the development of and, where appropriate, adoption of international technical standards.
- Implement appropriate data input measures and protections for personal data and intellectual property.
What does the G7 AI code of conduct mean for businesses?
Ideally, the G7 framework will help ensure that businesses have a straightforward and clearly defined path to comply with any regulations they may encounter around AI usage. In addition, the code of conduct provides a practical framework for how organizations can approach the use and creation of foundation models and other artificial intelligence products or applications for international distribution. The code of conduct also gives business leaders and employees alike a clearer understanding of what ethical AI use looks like and how they can use AI to create positive change in the world.
Although this document provides useful information and guidance to G7 countries and organizations that choose to use it, the AI code of conduct is voluntary and non-binding.
What is the next step after the G7 AI code of conduct?
The next step is for G7 members to create the Hiroshima AI Process Comprehensive Policy Framework by the end of 2023, according to a White House statement. The G7 plans to “introduce monitoring tools and mechanisms to help organizations stay accountable for the implementation of these actions” in the future, according to the Hiroshima Process.
SEE: Organizations wanting to implement an AI ethics policy should check out this TechRepublic Premium download.
“We (the leaders of G7) believe that our joint efforts through the Hiroshima AI Process will foster an open and enabling environment where safe, secure and trustworthy AI systems are designed, developed, deployed and used to maximize the benefits of the technology while mitigating its risks, for the common good worldwide,” the White House statement reads.
Other international regulations and guidance for the use of AI
The EU’s AI Act is a proposed act currently under discussion in the European Union Parliament; it was first introduced in April 2023 and amended in June 2023. The AI Act would create a classification system under which AI systems are regulated according to their possible risks. Organizations that fail to meet the Act’s obligations, including those around prohibited practices, correct classification and transparency, would face fines. The AI Act has not yet been adopted.
On October 26, U.K. Prime Minister Rishi Sunak announced plans for an AI Safety Institute, which would assess risks from AI and include input from several countries, including China.
U.S. President Joe Biden released an executive order on October 30 detailing guidelines for the development and safety of artificial intelligence.
The U.K. held an AI Safety Summit on November 1 and 2, 2023. At the summit, the U.K., U.S. and China signed a declaration stating that they would work together to design and deploy AI in a way that is “human-centric, trustworthy and responsible.” Find TechRepublic coverage of this summit here.