White House requires agencies to create AI safeguards, appoint CAIOs
AI’s impact on public safety
The policy defines several uses of AI that could impact public safety and human rights, and it requires agencies to put safeguards in place by Dec. 1. The safeguards must include ways to mitigate the risks of algorithmic discrimination and provide the public with transparency into government AI use.
Agencies must stop using AI systems that can't meet the safeguards. They must also notify the public of any AI exempted from complying with the OMB policy and explain the justification.
AIs that control dams, electrical grids, traffic control systems, vehicles, and robotic systems within workplaces fall under safety-impacting AIs. Meanwhile, AIs that block or remove protected speech, produce risk assessments of individuals for law enforcement agencies, and conduct biometric identification are classified as rights-impacting. AI decisions about healthcare, housing, employment, medical diagnosis, and immigration status also fall into the rights-impacting category.
The OMB policy also calls on agencies to release government-owned AI code, models, and data, when the releases do not pose a risk to the public or government operations.
The new policy received mixed reviews from human rights and digital rights groups. The American Civil Liberties Union called the policy an important step toward protecting US residents against AI abuses. But the ACLU also noted that the policy has major holes, including broad exceptions for national security systems, intelligence agencies, and sensitive law enforcement information.
“Federal uses of AI should not be permitted to undermine rights and safety, but harmful and discriminatory uses of AI by national security agencies, state governments, and more remain largely unchecked,” Cody Venzke, senior policy counsel with the ACLU, said in a statement. “Policymakers must step up to fill in those gaps and create the protections we deserve.”