NIST Establishes AI Safety Consortium
The National Institute of Standards and Technology established the U.S. AI Safety Institute Consortium on Feb. 7 to develop guidelines and standards for AI measurement and policy. U.S. AI companies, and companies that do business in the U.S., will be affected by those guidelines and standards and may have the opportunity to provide input on them.
What is the U.S. AI Safety Institute Consortium?
The U.S. AI Safety Institute Consortium is a joint public- and private-sector research group and data-sharing space for “AI creators and users, academics, government and industry researchers, and civil society organizations,” according to NIST.
Organizations could apply for membership between Nov. 2, 2023, and Jan. 15, 2024. Out of more than 600 interested organizations, NIST selected 200 companies and organizations as members. Participating organizations include Apple, Anthropic, Cisco, Hewlett Packard Enterprise, Hugging Face, Microsoft, Meta, NVIDIA, OpenAI, Salesforce and other companies, academic institutions and research organizations.
Those members will work on projects including:
- Developing new guidelines, tools, methods, protocols and best practices to contribute to industry standards for developing and deploying safe, secure and trustworthy AI.
- Developing guidance and benchmarks for identifying and evaluating AI capabilities, especially those capabilities that could cause harm.
- Developing approaches to incorporate secure development practices for generative AI.
- Developing methods and practices for successfully red-teaming machine learning.
- Developing ways to authenticate AI-generated digital content.
- Specifying and encouraging AI workforce skills.
“Responsible AI offers enormous potential for humanity, businesses and public services, and Cisco firmly believes that a holistic, simplified approach will help the U.S. safely realize the full benefits of AI,” said Nicole Isaac, vice president, global public policy at Cisco, in a statement to NIST.
“Working together across industry, government and civil society is essential if we are to develop common standards around safe and trustworthy AI,” said Nick Clegg, president of global affairs at Meta, in a statement to NIST. “We’re enthusiastic about being part of this consortium and working closely with the AI Safety Institute.”
A notable omission from the list of U.S. AI Safety Institute Consortium members is the Future of Life Institute, a global nonprofit backed by donors including Elon Musk and established to prevent AI from contributing to “extreme large-scale risks” such as global war.
The creation of the AI Safety Institute and its place in the federal government
The U.S. AI Safety Institute was created as part of the efforts set in motion by President Joe Biden’s October 2023 executive order on safe, secure and trustworthy AI.
The U.S. AI Safety Institute falls under the jurisdiction of the Department of Commerce. Elizabeth Kelly is the institute’s inaugural director, and Elham Tabassi is its chief technology officer.
Who is working on AI safety?
In the U.S., AI safety and regulation at the government level are handled by NIST and, now, by the U.S. AI Safety Institute under NIST. The major U.S. AI companies have worked with the government to encourage AI safety practices and the workforce skills that help the AI industry build the economy.
Academic institutions working on AI safety include Stanford University and the University of Maryland, among others.
A group of international cybersecurity organizations released the Guidelines for Secure AI System Development in November 2023 to address AI safety early in the development cycle.