GAO report says DHS, other agencies need to up their game in AI risk assessment

Nobody knows the probability of harm
The GAO said it is “recommending that DHS act quickly to update its guidance and template for AI risk assessments to address the remaining gaps identified in this report.” DHS, in turn, the GAO said, “agreed with our recommendation and stated it plans to provide agencies with additional guidance that addresses gaps in the report including identifying potential risks and evaluating the level of risk.”
Peter Rutten, research vice president at IDC, who specializes in performance-intensive computing, said Friday: “Indeed, no DHS agency knows the full extent or probability of harm that AI can do to the US critical infrastructure. I’d argue that, today, no entity knows the full extent or probability of harm that AI can do in general — whether it is an enterprise, government, academia, you name it.”
AI, he said, “is being pushed out to businesses and consumers by organizations that profit from doing so, and assessing and addressing the potential harm it may cause has until recently been an afterthought. We are now seeing more focus on these potential negative effects, but efforts to contain them, let alone prevent them, will always be far behind the steamroller of new innovations in the AI realm.”