UK’s AI Safety Institute Unveils Platform to Accelerate Safe AI Development


The UK’s AI Safety Institute has made its AI testing and evaluation platform available to the global AI community as of 10 May 2024.

The platform, called Inspect, is set to pave the way for the safe innovation of AI models, according to the AI Safety Institute and the Department for Science, Innovation and Technology (DSIT).

By making Inspect available to the global community, the Institute said it is helping accelerate the work on AI safety evaluations carried out internationally. The aim is that this leads to better safety testing and the development of more secure models.

It also allows for a consistent approach to AI safety evaluations around the world, according to the government.

Inspect is a software library which enables testers – from start-ups, academia and AI developers to international governments – to assess specific capabilities of individual models and then produce a score based on their results.

Inspect can be used to evaluate models in a range of areas, including their core knowledge, ability to reason, and autonomous capabilities. Released under an open-source licence, Inspect is now freely available for the AI community to use.
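The article does not show Inspect’s API, but as a rough sketch of how such a library-based evaluation might look, the snippet below uses the open-source inspect_ai Python package to define a small task and score a model’s answers. The sample question, scorer choice and model identifier are illustrative assumptions, not details from the article.

```python
# A minimal sketch of an evaluation with the open-source Inspect library
# (Python package "inspect_ai"). The sample question, scorer and model
# identifier below are illustrative assumptions, not from the article.
from inspect_ai import Task, task, eval
from inspect_ai.dataset import Sample
from inspect_ai.scorer import includes
from inspect_ai.solver import generate


@task
def core_knowledge():
    # One hand-written sample; real evaluations would use full benchmark datasets.
    return Task(
        dataset=[Sample(input="Which planet is known as the Red Planet?",
                        target="Mars")],
        solver=generate(),   # ask the model to answer directly
        scorer=includes(),   # score: does the answer contain the target?
    )


if __name__ == "__main__":
    # Runs the task against a chosen model and reports a score for the results.
    eval(core_knowledge(), model="openai/gpt-4o")
```

In practice, testers would swap in their own datasets, solvers and scorers to probe the specific capability of interest, which is the scoring workflow the article describes.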

AI Safety Institute Chair, Ian Hogarth, commented: “We have been inspired by some of the leading open source AI developers – most notably projects like GPT-NeoX, OLMo or Pythia which all have publicly available training data and OSI-licensed training and evaluation code, model weights, and partially trained checkpoints. This is our effort to contribute back.”

“We hope to see the global AI community using Inspect to not only carry out their own model safety tests, but to help adapt and build upon the open source platform so we can produce high-quality evaluations across the board.”

The UK AI Safety Institute was announced by British Prime Minister Rishi Sunak at the AI Safety Summit, held at Bletchley Park, England, in November 2023.

At the time, Sunak said the UK government’s ambition was to make the new entity a global hub tasked with testing the safety of emerging types of AI.
