Our Nation Needs Comprehensive AI Legislation, And Soon
By Dr. Allen Badeau, Chief Technology Officer, Empower AI
The White House recently released its “Blueprint for an AI Bill of Rights,” a framework meant to lay the groundwork for the future development and use of artificial intelligence (AI).
While the framework includes concrete steps for agencies looking to implement AI, it is only the latest in a long line of guidelines governing how government AI is developed and deployed.
Earlier this year, the Department of Energy released an “AI Risk Management Playbook” with recommendations to follow throughout the AI lifecycle. In November 2021, the Department of Defense released its “Responsible AI Guidelines,” giving stakeholders and companies a framework for ensuring fairness, accountability, and transparency across the AI lifecycle.
These initiatives are evidence of AI’s rapid growth and adoption in the federal sphere. Yet they all lack the enforceable legislative backing needed to make government-wide AI a reality.
New legislation should streamline the AI development process, creating cohesive, measurable benchmarks for agencies in the early stages of their AI journey.
AI Is a Siloed Technology
Because government access to users and data is tightly restricted, agency networks often operate in silos. Unfortunately, the same is true of AI, and the absence of standardized AI legislation and regulation only exacerbates the problem.
AI by its nature lends itself to breaking down data silos: it streamlines data governance and helps expedite the approval process for agencies sharing data. Still, without foundational legislation, each agency sets its own AI requirements, making cross-agency collaboration more difficult.
For example, the Department of Energy may require a level of AI transparency that the Department of Homeland Security’s AI cannot provide, making collaboration difficult and creating silos of information that cannot be shared between the two agencies. The network in which one AI system operates may also look completely different from another’s, making it even harder to share information.
Actionable legislation will help solve this issue. With government-wide AI regulations and recommendations in force, agencies can work within each other’s standardized AI networks, confident that their guidelines are being met and that the data silos created by divergent frameworks are coming down.
Standardized legislation, however, may be easier said than done.
A World of AI Regulation
Because AI is often mission-oriented, no single framework or piece of legislation can fulfill the mission requirements of every agency.
Take one critical aspect of AI, ethics, as an example. The AI Bill of Rights does feature guidelines on mitigating discrimination in AI, but agencies face different ethical considerations and risks. While the Department of Defense (DoD) must weigh life-and-death decisions for warfighters overseas, the Department of Education must look at bias in student applications or prejudice in curricula.
This doesn’t mean AI legislation can ignore ethics, however. Instead, comprehensive legislation should acknowledge the plurality of AI by requiring each agency to create a framework explicitly designed for its own goals while still meeting baseline government-wide criteria.
Legislation can account for specific agency missions by requiring developers to consider the general challenges most AI solutions must address – such as bias, user safety, and implementation – while allowing flexibility in the details.
For example, while the DoD and the Department of Education have different ethical considerations, a guideline such as “AI must only be used to support the safety and development of U.S. residents and citizens” applies to both agencies. Furthermore, requiring that each agency’s detailed guidelines be approved by the legislative branch ensures that every framework is viable and consistent with these baseline considerations.
NIST has already laid the groundwork for this process in its AI Risk Management Framework, which lists the characteristics of trustworthy systems as “valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed.” By giving this list legal force and elaborating on additional considerations, the government can create a foundation for AI legislation.
Artificial intelligence has become increasingly important to government missions within the past couple of years. It can be a powerful tool, expediting decision-making, monitoring and predicting insider threats, aiding in cybersecurity compliance, automating repetitive tasks, assisting with onboarding and offboarding, and more.
However, agency-level frameworks are not enough: legislation is necessary to break down the data and AI silos across today’s government and to allow AI to reach its full potential. It will also help level the playing field for the government’s industry technology partners and reinvigorate innovation, ensuring America remains a leader in the ever-evolving world of AI.
About the Author
Dr. Allen Badeau is the chief technology officer for Empower AI, as well as the director of the Empower AI Center for Rapid Engagement and Agile Technology Exchange (CREATE) Lab. In this role, he leads innovation research and development activities designed to add value to Empower AI’s customers and their missions, including cutting-edge technologies such as artificial intelligence, quantum computing, software-defined networking, robotics, advanced data analytics, agile DevSecOps, and many other areas.
With the CREATE Lab, Empower AI is building next-generation, value-based solutions for its customers utilizing a collaborative, knowledge-sharing environment. Under Allen’s direction, the CREATE ecosystem brings together people, processes, and ideas, as well as software and hardware resources — exploring the strategy and implementation of new and advanced technologies and redefining how innovation is integrated into our customers’ environment.
Prior to joining Empower AI, Allen held various leadership roles at ASRC Federal, CSC, Innovative Management & Technology Services, and Lockheed Martin. He received his bachelor’s degree in physics and his master’s and doctoral degrees in mechanical engineering from West Virginia University.
Allen can be reached online at https://www.linkedin.com/in/allenbadeau/ and at Empower AI’s company website https://www.empower.ai/.