US sets AI safety aside in favor of 'AI dominance'



In October 2023, former president Joe Biden signed an executive order that included several measures for regulating AI. On his first day in office, President Trump overturned it, replacing it a few days later with his own order on AI in the US.

This week, some government agencies that enforce AI regulation were told to halt their work, while the director of the US AI Safety Institute (AISI) stepped down. 


So what does this mean practically for the future of AI regulation? Here’s what you need to know. 

What Biden’s order accomplished – and didn’t 

In addition to naming several initiatives around protecting civil rights, jobs, and privacy as AI accelerates, Biden’s order focused on responsible development and compliance. However, as ZDNET’s Tiernan Ray wrote at the time, the order could have been more specific, leaving loopholes available in much of the guidance. Though it required companies to report on any safety testing efforts, it didn’t make red-teaming itself a requirement, or clarify any standards for testing. Ray pointed out that because AI as a discipline is very broad, regulating it needs — but is also hampered by — specificity. 

A Brookings report noted in November that because federal agencies absorbed many of the directives in Biden’s order, those directives may be protected from Trump’s repeal. But that protection is looking less and less likely. 


Biden’s order established the US AI Safety Institute (AISI), which is part of the National Institute of Standards and Technology (NIST). The AISI conducted AI model testing and worked with developers to improve safety measures, among other regulatory initiatives. In August, AISI signed agreements with Anthropic and OpenAI to collaborate on safety testing and research; in November, it established a testing and national security task force.

On Wednesday, likely due to Trump administration shifts, AISI director Elizabeth Kelly announced her departure from the institute via LinkedIn. The fate of both initiatives, and the institute itself, is now unclear. 

The Consumer Financial Protection Bureau (CFPB) also carried out many of the Biden order’s objectives. For example, a June 2023 CFPB study on chatbots in consumer finance noted that they “may provide incorrect information, fail to provide meaningful dispute resolution, and raise privacy and security risks.” CFPB guidance states lenders have to provide reasons for denying someone credit regardless of whether or not their use of AI makes this difficult or opaque. In June 2024, CFPB approved a new rule to ensure algorithmic home appraisals are fair, accurate, and comply with nondiscrimination law. 

This week, the Trump administration halted work at CFPB, signaling that it may be on the chopping block — which would severely undermine the enforcement of these efforts. 


CFPB is in charge of ensuring companies comply with anti-discrimination measures like the Equal Credit Opportunity Act and the Consumer Financial Protection Act, and has noted that AI adoption can exacerbate discrimination and bias. In an August 2024 comment, CFPB noted it was “focused on monitoring the market for consumer financial products and services to identify risks to consumers and ensure that companies using emerging technologies, including those marketed as ‘artificial intelligence’ or ‘AI,’ do not violate federal consumer financial protection laws.” It also stated it was monitoring “the future of consumer finance” and “novel uses of consumer data.” 

“Firms must comply with consumer financial protection laws when adopting emerging technology,” the comment continues. It’s unclear what body would enforce this if CFPB radically changes course or ceases to exist under new leadership. 

How Trump’s order compares 

On January 23rd, President Trump signed his own executive order on AI. In terms of policy, the single-line directive says only that the US must “sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” 

Unlike Biden’s order, terms like “safety,” “consumer,” “data,” and “privacy” don’t appear at all. There are no mentions of whether the Trump administration plans to prioritize safeguarding individual protections or address bias in the face of AI development. Instead, it focuses on removing what the White House called “unnecessarily burdensome requirements for companies developing and deploying AI,” seemingly focusing on industry advancement. 


The order goes on to direct officials to find and remove “inconsistencies” with it in government agencies — that is to say, remnants of Biden’s order that have been or are still being carried out. 

In March 2024, the Biden administration released an additional memo stating government agencies using AI would have to prove those tools weren’t harmful to the public. Like other Biden-era executive orders and related directives, it emphasized responsible deployment, centering AI’s impact on individual citizens. Trump’s executive order notes that it will review (and likely dismantle) much of this memo by March 24th. 

This is especially concerning given that last week, OpenAI released ChatGPT Gov, a version of OpenAI’s chatbot optimized for security and government systems. It’s unclear when government agencies will get access to the chatbot or whether there will be parameters around how it can be used, though OpenAI says government workers already use ChatGPT. If the Biden memo — which has since been removed from the White House website — is gutted, it’s hard to say whether ChatGPT Gov will be held to any similar standards that account for harm. 

Trump’s AI Action Plan

Trump’s executive order gave his staff 180 days to come up with an AI policy, meaning its deadline to materialize is July 22nd. On Wednesday, the Trump administration put out a call for public comment to inform that action plan. 

The Trump administration is disrupting AISI and CFPB — two key bodies that carry out Biden’s protections — without a formal policy in place to catch the fallout. That leaves AI oversight and compliance in a murky state for at least the next six months (millennia in AI development timelines, given the rate at which the technology evolves), all while tech giants become even more entrenched in government partnerships and initiatives like Project Stargate. 


Considering global AI regulation is still far behind the rate of advancement, perhaps it was better to have something rather than nothing. 

“While Biden’s AI executive order may have been mostly symbolic, its rollback signals the Trump administration’s willingness to overlook the potential dangers of AI,” said Peter Slattery, a researcher on MIT’s FutureTech team who led its Risk Repository project. “This could prove to be shortsighted: a high-profile failure — what we might call a ‘Chernobyl moment’ — could spark a crisis of public confidence, slowing the progress that the administration hopes to accelerate.”

“We don’t want advanced AI that is unsafe, untrustworthy, or unreliable — no one is better off in that scenario,” he added.  




