Report: AI giants grow impatient with UK safety tests
Key AI companies have told the UK government to speed up its safety testing of their systems, raising questions about future government initiatives that may likewise hinge on technology providers opening up generative AI models to testing before new releases reach the public.
OpenAI, Google DeepMind, Microsoft, and Meta are among the companies that have agreed to let the UK's new AI Safety Institute (AISI) evaluate their models, but they aren't happy with the current pace or transparency of the evaluation, according to a report in the Financial Times, which cited sources close to the companies.
Although the companies have said they are willing to amend their models if the AISI finds flaws, they are under no obligation to change or delay releases based on the test outcomes, the sources said.
The companies' pushback on the AISI evaluation includes wanting more details about the tests being conducted, how long they will take, and how the feedback process will work, according to the report. It's also unclear whether models will need to be resubmitted for testing every time there is even a slight update, a requirement AI developers may find too onerous to consider.
Murky process, murky outcomes
The AI vendors' reservations appear valid, given how few details are available about how the evaluation actually works. And with other governments considering similar AI safety evaluations, any confusion over the UK process will only grow as additional government bodies make the same, for now voluntary, demands on AI developers.
The UK government told the Financial Times that testing of the AI models has already begun in collaboration with their developers. The testing relies on pre-deployment access to capable AI models, including unreleased ones such as Google's Gemini Ultra, one of the key commitments companies made at the UK's AI Safety Summit in November, according to the report.