NIST launches ambitious effort to assess LLM risks
The Biden Administration “is focused on keeping up with constantly evolving technology,” something many administrations have struggled with, arguably without success, said Brian Levine, a managing director at Ernst & Young. Indeed, Levine said he sees some current efforts, especially those around generative AI, potentially going in the opposite direction, with US and global regulators digging in “too early, while the technology is still very much in flux.”
In this instance, though, Levine said he sees the NIST effort as promising, given the agency’s long track record of rigorous technology testing across a wide range of fields. One of NIST’s first decisions, he said, will be to figure out “the type of AI code that is the best to test here.” Some of that may be influenced by which organizations volunteer to have their code examined.
Some AI officials said it would be difficult to analyze LLMs in a vacuum, given that their risks are largely dictated by how they are used. Still, Prins said that evaluating the code on its own is valuable.
“In security, a workforce needs to be trained on security best practices, but that doesn’t negate the value of anti-phishing software. The same logic applies to AI safety and security: These issues are big and need to be addressed from a lot of different angles,” Prins said. “How people abuse AI is a problem, and that should be addressed in other ways, but this is still a technology — any improvements we can make to simplify how we use safe and secure systems are beneficial in the long run.”