The complex patchwork of US AI regulation has already arrived
The second category focuses on specific sectors, particularly high-risk uses of AI to determine or assist with decisions related to employment, housing, healthcare, and other major life issues. For example, New York City Local Law 144, passed in 2021, prohibits employers and employment agencies from using an AI tool for employment decisions unless it has been audited in the previous year. A handful of states, including New York, New Jersey, and Vermont, appear to have modeled legislation after the New York City law, Mahdavi says.
The third category consists of broad AI bills, often focused on transparency, bias prevention, impact assessments, consumer opt-outs, and other issues. These bills tend to impose regulations on both AI developers and deployers, Mahdavi says.
Addressing the impact
The proliferation of state laws regulating AI may cause organizations to rethink their deployment strategies, with an eye on compliance, says Reade Taylor, founder of IT solutions provider Cyber Command.
“These laws often emphasize the ethical use and transparency of AI systems, especially concerning data privacy,” he says. “The requirement to disclose how AI influences decision-making processes can lead companies to rethink their deployment strategies, ensuring they align with both ethical considerations and legal requirements.”
But a patchwork of state laws across the US also creates a challenging environment for businesses, particularly small to midsize companies that may not have the resources to monitor multiple laws, he adds.
A growing number of state laws “can either discourage the use of AI due to the perceived burden of compliance or encourage a more thoughtful, responsible approach to AI implementation,” Taylor says. “In our journey, prioritizing compliance and ethical considerations has not only helped mitigate risks but also positioned us as a trusted partner in the cybersecurity domain.”
The growing number of state laws focused on AI has some positive and potentially negative effects, adds Adrienne Fischer, a lawyer with Basecamp Legal, a Denver law firm monitoring state AI bills. On the plus side, many of the state bills promote best practices in privacy and data security, she says.
“On the other hand, the diversity of regulations across states presents a challenge, potentially discouraging businesses due to the complexity and cost of compliance,” Fischer adds. “This fragmented regulatory environment underscores the call for national standards or laws to provide a coherent framework for AI usage.”
Organizations that proactively monitor and comply with the evolving legal requirements can gain a strategic advantage. “Staying ahead of the legislative curve not only minimizes risk but can also foster trust with consumers and partners by demonstrating a commitment to ethical AI practices,” Fischer says.
Mahdavi also recommends that organizations not wait until the regulatory landscape settles. Companies should first take an inventory of the AI products they're using, then rate the risk of each tool, focusing on products that make outcome-based decisions in employment, credit, healthcare, insurance, and other high-impact areas. From there, companies should establish an AI governance plan.
“You really can’t understand your risk posture if you don’t understand what AI tools you’re using,” she says.