White House requires agencies to create AI safeguards, appoint CAIOs

AI’s impact on public safety

The policy defines several uses of AI that could impact public safety and human rights, and it requires agencies to put safeguards in place by Dec. 1. The safeguards must include ways to mitigate the risks of algorithmic discrimination and provide the public with transparency into government AI use.

Agencies must stop using AI systems that can’t meet the safeguards. They must also notify the public of any AI system exempted from complying with the OMB policy and explain the justification.

AI systems that control dams, electrical grids, traffic control systems, vehicles, and robotic systems within workplaces are classified as safety-impacting. Meanwhile, systems that block or remove protected speech, produce risk assessments of individuals for law enforcement agencies, or conduct biometric identification are classified as rights-impacting. AI decisions about healthcare, housing, employment, medical diagnosis, and immigration status also fall into the rights-impacting category.

The OMB policy also calls on agencies to release government-owned AI code, models, and data when doing so does not pose a risk to the public or to government operations.

The new policy drew mixed reviews from human rights and digital rights groups. The American Civil Liberties Union called it an important step toward protecting US residents from AI abuses but noted that it contains major holes, including broad exceptions for national security systems and intelligence agencies, as well as exceptions for sensitive law enforcement information.

“Federal uses of AI should not be permitted to undermine rights and safety, but harmful and discriminatory uses of AI by national security agencies, state governments, and more remain largely unchecked,” Cody Venzke, senior policy counsel with the ACLU, said in a statement. “Policymakers must step up to fill in those gaps and create the protections we deserve.”  
