Challenges and opportunities that AI presents to CISOs
The artificial intelligence (AI) landscape is constantly shifting. To keep pace with these evolving technologies, CISOs must prepare for additional complexity in their security strategies.
While AI presents challenges for CISOs, it also offers opportunities. Here, we talk with Jadee Hanson, Chief Information Security Officer at Vanta, about the risks and benefits of AI.
Security magazine: Tell us about your title and background.
Hanson: I serve as Chief Information Security Officer at Vanta, a trust management provider. As Vanta’s CISO, my responsibility is to protect the organization from cyber threats and data loss. Because Vanta sells to the security buyer, I also play a role in helping the organization understand the security landscape and buyer nuances.
I’ve been in security for almost two decades. I became interested in technology very early on in high school, where I used to help the tech department build desktops. This eventually sparked my interest in pursuing a degree in information systems, and later a career as a security professional.
My first security job out of college was at Deloitte, where I worked on security audits and consulting and did a lot of pen testing, back when no one knew what pen testing was.
I later joined Target, where I spent over seven years leading a number of security functions. During my time at Target, I oversaw the security aspects of the sale of Target’s pharmacy business to CVS Health. After that, I served as Chief Information Security Officer and Chief Information Officer at Code42, where I led the security and technology organizations and set the technology strategy, ensuring we purchased the right technology to move the organization forward.
Security magazine: What new challenges and risks do CISOs have to contend with due to the proliferation of AI technologies?
Hanson: The risks largely stem from the fundamental unknowns. One of our customers recently described AI as an “alien-like technology.” I love that description because it’s accurate. AI is something completely new and different.
This is a challenge for security practitioners because we are cautious by nature. We like to understand how things work in order to figure out how to secure them.
I myself am on a continuous fact-finding mission to decode the fundamentals of AI models, especially given how quickly the space is evolving, in an effort to understand how they work and how I can properly add security controls.
Here are a few specific risks as I see them today.
- One is training models on protected information. AI systems rely on data: models contain the data we feed them for training, or the data they collect as part of their normal functionality. In many cases this data is pulled from public sources, but when private information is used to train a model, we need to ensure that the model is isolated.
- A second risk is when companies entrust AI to handle too much too early. We know AI is doing some really great things to create efficiencies in many of our everyday activities. That said, we have all seen AI get things wrong. For example, there was a recent case where a chatbot run by a major airline lied to customers about a bereavement policy. The chatbot told a passenger they could retroactively apply for a last-minute travel discount, which was not actually outlined in the airline’s policy. To be clear, I am not saying we shouldn’t use chatbots to help customers with service issues. I am merely saying that having chatbots handle sensitive issues such as policies might be a bit premature, and we should therefore be prepared for the consequences should the chatbot get something wrong.
- Then there is the use of AI by malicious adversaries. Whenever new technology is released, most people are drawn to its benefits, while bad actors are thinking of ways to use it in malicious and unintended ways. We’ve already seen this happen with deepfakes, misinformation campaigns, malware and phishing campaigns.
So, as excited as I am about this technology, we need to make sure we fully think through the risks and approach its use in a thoughtful manner.
Security magazine: How can the top risks associated with AI be mitigated?
Hanson: First and foremost, we need to start figuring out how to adopt the tech and do it the right way.
Security teams that push back on AI will only get passed by, so the key is to partner with business teams and help them adopt AI in a thoughtful way.
As I see it, security teams should start by doing two important things.
First, understand which vendors in your stack are leveraging AI in their software. Ask questions to understand how AI is being applied to your data specifically. Find out whether they are training models on the data you provide and what that means for further protecting that data.
Second, find out which vendors are using models trained on your sensitive data. This is where things get risky, and you need to decide whether that is something you are comfortable with.
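Even a lightweight inventory makes these two reviews easier to run consistently. As a minimal sketch in Python (the field names and vendor entries are hypothetical, invented here for illustration, and not a reference to any product), a team might track vendor AI usage like this:

```python
from dataclasses import dataclass

@dataclass
class VendorAIReview:
    """One vendor's answers to the AI due-diligence questions above."""
    vendor: str
    uses_ai: bool                  # Does the product embed AI/ML features?
    trains_on_customer_data: bool  # Are models trained on data we provide?
    data_classification: str       # Highest class of our data the vendor sees
    model_isolated: bool           # Is any model trained on our data single-tenant?
    notes: str = ""

# Example entries; the review flags any vendor training on our data
# without isolation (both vendors below are made up).
inventory = [
    VendorAIReview("ExampleCRM", True, False, "internal", True),
    VendorAIReview("ExampleSupportBot", True, True, "confidential", False,
                   "Follow up: request opt-out or tenant isolation."),
]

needs_follow_up = [v for v in inventory
                   if v.trains_on_customer_data and not v.model_isolated]
for v in needs_follow_up:
    print(f"Review {v.vendor}: trains on {v.data_classification} data "
          f"without isolation. {v.notes}")
```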
The other thing to understand is that AI is like any other new technology: the fundamental security controls still apply. We need to be thinking about access controls, logging, data classification and so on, just as we would anywhere else.
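To make that concrete, here is a minimal sketch of those fundamentals applied to an AI integration: a gate that checks the caller's role against the classification of the data in a prompt and writes an audit log entry before anything reaches a model. The roles, classification labels, and the call_model stub are all hypothetical, standing in for whatever provider client and policy a team actually uses:

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

# Hypothetical policy: which roles may send data of each classification
ALLOWED = {
    "public":       {"analyst", "engineer", "support"},
    "internal":     {"analyst", "engineer"},
    "confidential": {"engineer"},
}

def call_model(prompt: str) -> str:
    """Stand-in for a real LLM call; replace with your provider's client."""
    return f"(model response to {len(prompt)} chars)"

def guarded_model_call(user: str, role: str, classification: str, prompt: str) -> str:
    # Access control: deny by default if this role can't handle this data class
    if role not in ALLOWED.get(classification, set()):
        audit_log.warning("DENY user=%s role=%s class=%s at=%s",
                          user, role, classification,
                          datetime.now(timezone.utc).isoformat())
        raise PermissionError(f"{role} may not send {classification} data to the model")

    # Logging: record who sent what class of data and when (not the content)
    audit_log.info("ALLOW user=%s role=%s class=%s at=%s",
                   user, role, classification,
                   datetime.now(timezone.utc).isoformat())
    return call_model(prompt)

print(guarded_model_call("jdoe", "engineer", "confidential", "Summarize this design doc."))
```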
Security magazine: What are the benefits that AI technology can provide CISOs?
Hanson: Machine Learning (or ML) has been used in security for years to perform functions like identifying anomalies in logs. This new wave of AI and Large Language Models (LLMs) will take this to a new level.
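For a sense of that ML baseline, here is a minimal sketch of classic log anomaly detection using scikit-learn's IsolationForest on a few per-event features. The features and numbers are invented for illustration, not a production detector:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy features per login event: [hour_of_day, failed_attempts, bytes_out_kb]
rng = np.random.default_rng(0)
normal = np.column_stack([
    rng.normal(14, 3, 500),    # activity clustered around business hours
    rng.poisson(0.2, 500),     # occasional failed attempt
    rng.normal(120, 30, 500),  # typical outbound data volume
])
suspicious = np.array([[3, 9, 900.0]])  # 3 a.m., many failures, large transfer

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))  # -1 means flagged as anomalous
```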
We’re seeing this acceleration across key security functions in our own AI product, which we use ourselves. For example, we’re seeing greater efficiency in questionnaire automation, in mapping controls to tests, and in reporting.
And it’s precisely these everyday tasks that we need the most help with in security. By further automating these functions, we can stay focused on addressing the higher risks throughout the organization.
Security magazine: Anything else you would like to add?
Hanson: There’s one additional concern security pros are grappling with at the moment: the regulatory risk posed by the rapid development of AI.
To stay ahead of this, businesses can follow the AI Risk Management Framework (AI RMF) from the National Institute of Standards and Technology (NIST). The AI RMF was created to mitigate risks associated with the design, development, use, and evaluation of AI products, and it organizes that work into four core functions: Govern, Map, Measure, and Manage.
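As a minimal sketch of putting the framework to use (the risk entries, owners, and statuses below are invented for illustration), a team might track where each identified AI risk sits across those four functions:

```python
# Hypothetical tracker aligning identified AI risks to NIST AI RMF functions
RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

risks = [
    {"risk": "Vendor trains models on customer data", "function": "Map",
     "owner": "security", "status": "assessing"},
    {"risk": "Chatbot gives incorrect policy answers", "function": "Measure",
     "owner": "product", "status": "testing"},
]

for r in risks:
    assert r["function"] in RMF_FUNCTIONS  # catch typos against the framework
    print(f'{r["function"]:8} | {r["risk"]} (owner: {r["owner"]}, {r["status"]})')
```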