This tool tests AI's resilience to 'poisoned' data
The National Institute of Standards and Technology (NIST) is re-releasing a tool that tests how susceptible artificial intelligence (AI) models are to being “poisoned” by malicious data.
The move comes nine months after President Biden’s Executive Order on the safe, secure, and trustworthy development of AI, and is a direct response to that order’s requirement that NIST help with model testing. NIST also recently launched a program that helps Americans use AI without falling prey to synthetic, or AI-generated, content and that promotes AI development for the benefit of society.
The tool, called Dioptra, was initially released two years ago and aims to help small- to medium-sized businesses and government agencies. Using it, an organization can determine which kinds of attacks would degrade its AI model's performance and quantify that degradation to see under what conditions the model fails.
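To make that concrete, here is a minimal sketch, assuming scikit-learn and a simple label-flipping attack, of the before-and-after measurement such a tool automates. The dataset, attack, and helper names are illustrative assumptions and do not reflect Dioptra's actual interface.

```python
# Illustrative only: a label-flipping poisoning attack against a simple
# classifier, measuring how much accuracy drops. Not Dioptra's API.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    """Train on the (possibly poisoned) labels, score on clean test data."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

# Attack: flip 20% of the training labels.
y_poisoned = y_train.copy()
flip = rng.choice(len(y_poisoned), size=int(0.2 * len(y_poisoned)), replace=False)
y_poisoned[flip] = 1 - y_poisoned[flip]

print(f"clean accuracy:    {train_and_score(y_train):.3f}")
print(f"poisoned accuracy: {train_and_score(y_poisoned):.3f}")
```

Comparing the two numbers quantifies the attack's impact, which is the basic measurement the tool builds on.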
Also: Beware of AI ‘model collapse’: How training on synthetic data pollutes the next generation
Why does this matter?
It’s critical that organizations take steps to ensure their AI programs are safe, especially as NIST actively encourages federal agencies to use AI across a variety of systems. AI models train on existing data, and NIST points out that if someone deliberately injects malicious data (say, data that teaches a model to ignore stop signs or speed limits) the results could be disastrous.
For all of AI's transformative benefits, NIST Director Laurie E. Locascio says, the technology carries risks far greater than those associated with other types of software. “These guidance documents and testing platform will inform software creators about these unique risks and help them develop ways to mitigate those risks while supporting innovation,” she notes in the release.
Also: Safety guidelines provide necessary first layer of data protection in AI gold rush
Dioptra can test multiple combinations of attacks, defenses, and model architectures, NIST says, to help users understand which attacks pose the greatest threats and which defenses counter them best.
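In practice, that kind of sweep looks like a grid of experiments. The sketch below, again assuming scikit-learn, runs a hypothetical attack-strength-versus-defense grid and records accuracy for each combination; the label-flipping attack, the toy nearest-neighbor relabeling defense, and all names are assumptions for illustration, not Dioptra's own components.

```python
# Illustrative experiment grid: attack strength x defense, recording accuracy.
# The attack and defense here are toy stand-ins, not Dioptra components.
from itertools import product
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

def flip_labels(labels, fraction):
    """Attack: flip a given fraction of the training labels."""
    out = labels.copy()
    idx = rng.choice(len(out), size=int(fraction * len(out)), replace=False)
    out[idx] = 1 - out[idx]
    return out

def knn_relabel(features, labels, k=10):
    """Toy defense: relabel each point to agree with its nearest neighbors."""
    return KNeighborsClassifier(n_neighbors=k).fit(features, labels).predict(features)

defenses = {"none": lambda feats, labs: labs, "knn_relabel": knn_relabel}
for fraction, (name, defend) in product([0.0, 0.1, 0.3], defenses.items()):
    labels = defend(X_tr, flip_labels(y_tr, fraction))
    acc = accuracy_score(y_te, LogisticRegression(max_iter=1000)
                         .fit(X_tr, labels).predict(X_te))
    print(f"attack={fraction:.1f}  defense={name:<11}  accuracy={acc:.3f}")
```

Scanning the printed grid shows which attack strengths do the most damage and whether a given defense recovers the lost accuracy, the kind of comparison the tool is designed to surface.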
The tool doesn’t promise to eliminate all risk, but it does claim to help mitigate risks while still supporting innovation. It’s available to download for free.