AI incident reporting shortcomings leave regulatory safety hole

Novel problems
Without an adequate incident reporting framework, systemic problems could set in.
AI systems could directly harm the public, for example by improperly revoking access to social security payments, according to CLTR. Although the organisation looked closely at the situation in the UK, its findings could also apply to many other countries.
The UK government’s Department for Science, Innovation & Technology (DSIT) lacks a central, up-to-date picture of incidents involving AI systems as they emerge, according to CLTR. “Though some regulators will collect some incident reports, we find that this is not likely to capture the novel harms posed by frontier AI,” it said, referring to the high-powered generative AI models at the cutting edge of the industry.