- Mastering Azure management: A comparative analysis of leading cloud platforms
- Sweat the small stuff: Data protection in the age of AI
- GAO report says DHS, other agencies need to up their game in AI risk assessment
- This LG Bluetooth speaker impressed me with a design feature I've yet to see on competitors
- Amazon's AI Shopping Guides helps you research less and shop more. Here's how it works
Sweat the small stuff: Data protection in the age of AI
As concerns about AI security, risk, and compliance continue to escalate, practical solutions remain elusive. While NIST released NIST-AI-600-1, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile on July 26, 2024, most organizations are just beginning to digest and implement its guidance, with the formation of internal AI Councils as a first step in AI governance. So as AI adoption and risk increases, it’s time to understand why sweating the small and not-so-small stuff matters and where we go from here.
Data protection in the AI era
Recently, I attended the annual member conference of the ACSC, a non-profit organization focused on improving cybersecurity defense for enterprises, universities, government agencies, and other organizations. From the discussions, it is clear that today, the critical focus for CISOs, CIOs, CDOs, and CTOs centers on protecting proprietary AI models from attack and protecting proprietary data from being ingested by public AI models.
While a smaller number of organizations are concerned about the former problem, those in this category realize that they must protect against prompt injection attacks that cause models to drift, hallucinate, or completely fail. In the early days of AI deployment, there was no well-known incident equivalent to the 2013 Target breach that represented how an attack might play out. Most of the evidence is academic at this point in time. However, executives who have deployed their own models have begun to focus on how to protect their integrity, given it will be only a matter of time before a major attack becomes public information, resulting in brand damage and potentially greater harm.