AI security for CISOs: A dynamic and practical framework


Few technologies have infiltrated the business world as rapidly — and with so much enthusiasm from leaders — as artificial intelligence (AI) and machine learning (ML). By some reports, two-thirds of enterprises are already using generative AI, citing benefits like productivity gains and improved customer relationship management. 

But with the excitement for AI and ML come security and privacy concerns. Leaders consistently point to security as a top concern when adopting generative AI. Even CISOs, who are accustomed to navigating technological change within their organizations, are nervous: they need to understand the ins and outs of AI, balance innovation with security and ensure any changes have a net positive impact on their business in the long run. Without the proper foundation or an understanding of how to secure their AI systems and address key security concerns, innovation slows or gets stuck.

To address this need, my team compiled input from more than 100 CISOs, gathered through a traveling workshop program on the topic, and from 15 industry AI experts to develop a framework (the Databricks AI Security Framework) that guides CISOs through bringing the power of generative AI to their organizations securely and at scale, and that helps counter the pattern of security teams slowing down AI because they lack confidence in their ability to secure it. We’ve gleaned some interesting insights in speaking with global CISOs about this problem. Here’s what we’ve learned.

Why CISOs are “slowing down” AI 

The concerns among CISOs boil down to two key problems: there’s a lack of specific guidance around AI, and consequently, CISOs are forced to operate without enough knowledge in this important area. The lion’s share of guidance, including frameworks from NIST, international organizations and leading technology companies, does an excellent job of organizing the risks in AI and ML systems at the highest level. But much of it lacks the execution-oriented specificity technical leaders need right now.

Throughout our series of CISO workshops, it’s become clear that AI is still too new for most CISOs to have had the time or space to develop a clear understanding of what the system entails or where to start with security, and the business side of their organizations may not have much patience when it comes to getting started with AI. It’s as if CISOs are being told to “secure the building” without any details about what they’re securing. The necessary context is missing: How many floors does it have? What are the doors like? What is the surrounding area like? I’ve also witnessed CISOs debate ad nauseam whether securing AI is actually different from securing traditional, deterministic applications. While the controls for securing AI look similar to those for other systems in terms of how and what to secure, this knowledge gap leaves CISOs in the dark about where they ought to apply them.

Without an understanding of the difference between testing data and validation data, or between a feature catalog and a model catalog, and with only high-level guidance available in the face of a daunting new paradigm, some CISOs are burying their heads in the sand. This is understandable, since humans are wired to fear the unknown, but it doesn’t have to be this way.

Needed: A holistic technical framework for AI security 

The Databricks AI Security Framework seeks to provide the guidance needed to answer those core questions and inform the work of CISOs’ security teams. It outlines the 12 foundational components of a data-centric AI and ML system and maps a total of 55 technical security risks across those components.

[Diagram: the framework’s AI and ML system components and the security risks mapped to each. Image courtesy of Khawaja.]

For any given situation, only a few risks actually apply. The diagram above shows those risks in context, and by focusing on a relevant subset, the universe of controls required to secure an AI system becomes far more manageable.

The four foundational subsystems of AI, per the framework, are: 

  1. Data operations, which include ingesting and transforming data and ensuring data security and governance. 
  2. Model operations, which include building custom models, acquiring models from a model marketplace, or using SaaS LLMs (like OpenAI). 
  3. Model deployment and serving, which consist of securely building model images, isolating and securely serving models, automated scaling, rate limiting, and monitoring deployed models.
  4. Operations and platform, which include platform vulnerability management and patching, model isolation and controls over the system. 

From there, one can visualize and understand the corresponding risks to each foundational component. For example, model operations encompass 13 potential risks, including model theft, model inversion and model asset leak. 
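
To make that mapping concrete, here is a minimal sketch in Python of how a security team might encode a component-to-risk register and filter it down to the risks that apply to a given deployment. The component names follow the four subsystems above, but the risk catalog, field names and helper function are illustrative assumptions rather than the framework’s official schema.

```python
# Illustrative sketch only: a tiny risk register mapping AI subsystems to
# example risks, then filtering to the subset relevant for one deployment.
# The real framework enumerates 12 components and 55 risks.

from dataclasses import dataclass

@dataclass(frozen=True)
class Risk:
    name: str
    component: str  # which foundational subsystem the risk belongs to

RISK_REGISTER = [
    Risk("raw data exposure", "data_operations"),
    Risk("data poisoning", "data_operations"),
    Risk("model theft", "model_operations"),
    Risk("model inversion", "model_operations"),
    Risk("model asset leak", "model_operations"),
    Risk("insecure model image", "deployment_and_serving"),
    Risk("missing rate limiting", "deployment_and_serving"),
    Risk("unpatched platform vulnerability", "operations_and_platform"),
]

def applicable_risks(components_in_use):
    """Return only the risks tied to the subsystems a deployment actually uses."""
    return [r for r in RISK_REGISTER if r.component in components_in_use]

# Example: a team consuming a SaaS LLM behind its own serving endpoint might
# focus on data, serving and platform risks rather than custom-model risks.
if __name__ == "__main__":
    in_scope = {"data_operations", "deployment_and_serving", "operations_and_platform"}
    for risk in applicable_risks(in_scope):
        print(f"{risk.component}: {risk.name}")
```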

This approach helps tackle companies’ varying needs when it comes to securing AI. Just as preparing a building for natural disasters depends on where it stands (is it in a region that gets earthquakes, or is it more exposed to hurricanes?), CISOs must prioritize some components of their AI system over others. To return to the example of model operations, a company that has invested heavily in building a proprietary model will be more concerned with model theft and protecting the model itself; one that holds a wealth of valuable training data might be most concerned with safeguarding its raw training data.
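
As a hypothetical extension of the sketch above, that prioritization can be expressed as simple weights tied to business context: a team whose differentiator is a proprietary model weights model-operations risks most heavily, while one whose main asset is training data weights data operations instead. The weights, risk names and component names below are illustrative assumptions, not values prescribed by the framework.

```python
# Hypothetical prioritization sketch: weight each subsystem by how critical
# it is to this particular business, then review the highest-weighted risks first.

BUSINESS_WEIGHTS = {
    "model_operations": 3,        # e.g., a heavily invested proprietary model
    "data_operations": 2,         # e.g., valuable raw training data
    "deployment_and_serving": 1,
    "operations_and_platform": 1,
}

RISKS = [
    ("model theft", "model_operations"),
    ("model inversion", "model_operations"),
    ("raw data exposure", "data_operations"),
    ("missing rate limiting", "deployment_and_serving"),
]

def ranked_risks(risks, weights):
    """Sort (risk, component) pairs so the most heavily weighted subsystems come first."""
    return sorted(risks, key=lambda pair: weights.get(pair[1], 0), reverse=True)

for risk, component in ranked_risks(RISKS, BUSINESS_WEIGHTS):
    print(f"{component}: {risk}")
```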

Guardrails to adapt to a changing landscape 

Importantly, when adopting any framework, ensuring it is dynamic and can change alongside the landscape is paramount; as regulations crystallize around how enterprises use or build AI, threats and vulnerabilities will continue to shift as well. Frameworks like ours, those of our peers, and those from governing organizations can help CISOs and other leaders pinpoint specific focus areas as the regulatory environment shifts. There may be a time when “doing generative AI” looks different in California than in New York, or in the US versus Europe. CISOs need to be equipped to respond to changing regulations on a dime, and by breaking down the system and looking at risks holistically, they can better understand which system components and risks a given change affects.

There’s a reason why there are so many uncertainties around security in AI: it’s hard, whether that’s due to the number of moving parts, systems that feel unfamiliar or the many gray areas in regulations and compliance today. Through a thorough understanding of which components make up an AI system, how those components work together and what distinct risks AI introduces, CISOs can better determine which components, threats and risks are the highest priority for their organization and ensure the corresponding controls are in place. CISOs deserve to say “yes” to innovation, and by using robust and dynamic frameworks to illuminate a path forward, they can securely shepherd their organizations’ data and AI journeys even as the landscape evolves.


