NIST launches ambitious effort to assess LLM risks
The Biden Administration “is focused on keeping up with constantly evolving technology,” which is something that many administrations have struggled with, arguably unsuccessfully, said Brian Levine, a managing director at Ernst & Young. Indeed, Levine said that he sees some current efforts — especially with generative AI — potentially going in the opposite direction, with US and global regulators digging in “too early, while the technology is still very much in flux.”
In this instance, though, Levine said that he saw the NIST efforts as promising, given NIST’s long and illustrious history of accurately conducting a wide range of technology testing. One of NIST’s first decisions, he said, will be to figure out “the type of AI code that is the best to test here.” Some of that may be influenced by which organizations volunteer to have their code examined, he said.
Some AI officials said that it would be difficult to analyze LLMs in a vacuum, given that their risks depend on how they are used. Still, Prins said that evaluating the code on its own is valuable.
“In security, a workforce needs to be trained on security best practices, but that doesn’t negate the value of anti-phishing software. The same logic applies to AI safety and security: These issues are big and need to be addressed from a lot of different angles,” Prins said. “How people abuse AI is a problem and that should be addressed in other ways, but this is still a technology — any improvements we can make to simplify how we use safe and secure systems is beneficial in the long run.”