The role of artificial intelligence in cyber resilience
Cyber resilience — or a lack thereof — is at the forefront of many security professionals’ minds. As the cyber threat landscape continues to evolve, many have looked to artificial intelligence (AI) as the answer. But how can leaders begin implementing AI in their organizations?
Here, we talk with Anneka Gupta, Chief Product Officer of Rubrik, about how AI can impact cyber resilience strategies.
Security magazine: Tell us about your title and background.
Anneka Gupta: I’m the Chief Product Officer of Rubrik, a cybersecurity company on a mission to secure the world’s data, serving more than 6,100 customers across every industry. I’m responsible for driving product innovation, strategy, and roadmap. As a product leader, I am ultimately responsible for our products driving growth for the business, which means I need to have my hands in everything from the technology to go-to-market and operations.
I’ve been in the technology industry for almost 15 years. I started my career in software engineering and have since worked in almost every function, which gave me an appreciation for the cross-functional effort required to make products successful. It’s ultimately what led me to product leadership in my previous roles and eventually to Rubrik.
In addition to my current role at Rubrik, I’m a Lecturer at the Stanford University Graduate School of Business and teach a course on product management.
Security: What impacts do you predict AI will have on cyber resilience, for better or worse?
Gupta: AI will continue to evolve to help fortify an organization’s cyber resilience. For example, organizations already leverage AI and machine learning to detect anomalies in data, provide insights that help shorten recovery times, and keep up with the latest attack patterns and emerging threats. Combining generative AI with traditional machine learning will strengthen all of these capabilities, ultimately leading to fewer false positives in detection and faster, more automated remediation.
Ultimately, AI will enable organizations to be more proactive in reducing the impact of data destruction or exfiltration, as monitoring of increasingly large data footprints becomes more manageable and actionable. AI has the potential to materially change the resources required to respond to the ever-evolving threat landscape.
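To make the anomaly-detection idea concrete, here is a minimal, illustrative sketch of how an unsupervised model such as scikit-learn’s IsolationForest could flag a backup snapshot whose data-change pattern resembles mass encryption. This is not Rubrik’s implementation; the features, thresholds and numbers are hypothetical and chosen only to show the general technique.

# Illustrative only: unsupervised anomaly detection on hypothetical backup telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical 90 days of normal daily backup metrics:
# [percent of data changed, files modified (thousands), compression ratio]
normal_days = np.column_stack([
    rng.normal(2.0, 0.5, 90),    # ~2% daily change rate
    rng.normal(15.0, 3.0, 90),   # ~15k files modified per day
    rng.normal(2.5, 0.2, 90),    # typical compression ratio
])

# Fit the detector on the historical baseline (no labels required).
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_days)

# A day that looks like mass encryption: huge change rate, many files touched,
# and poorly compressible (already-encrypted) data.
suspect_day = np.array([[38.0, 210.0, 1.05]])

print(detector.predict(suspect_day))      # -1 means the day is flagged as anomalous
print(detector.predict(normal_days[:3]))  # typical days come back as 1 (normal)

In practice, a flagged snapshot like this would feed into the kind of insights Gupta describes: narrowing down when an attack began and which data to recover.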
Security: Currently, how are threat actors leveraging AI to power cyberattacks?
Gupta: Attackers and defenders alike are increasing their use of AI. From a threat actor perspective, we’re already seeing them use AI to quickly generate malware, automate attacks and strengthen social engineering campaigns. This puts the focus even further on cyber resilience, as we know it’s not a matter of if, but when, an attacker will penetrate an organization and take destructive action.
Security: How can security leaders effectively fit AI into their cybersecurity strategies?
Gupta: While the right approach will depend on each security leader’s specific industry, I highly recommend that they talk to peers in their networks to see how they have successfully implemented AI within their organizations.
When considering AI implementation, it’s crucial to keep your focus on return on investment (ROI). To maximize ROI, identify the areas where your organization’s time is most consumed. By introducing AI in these areas, you can streamline processes, enhance efficiency and boost productivity. The value of AI should be evident in the ROI it generates for your organization, making it a sustainable investment.
For example, AI can be used to provide training for security professionals. It can also deliver context-specific training for the entire organization, building awareness of emerging threat vectors and how to respond to them.
Security: Anything else you would like to add?
Gupta: As security leaders, we need to be thoughtful about how we deploy AI responsibly in our organizations. While there is huge potential for benefit, as with any technology there is also huge potential to exacerbate current challenges, whether that’s misinformation, social inequality, fraud or loss of privacy. That doesn’t mean we should run away from this technology, but it does mean we need to be thoughtful about how we leverage it and mitigate the risks. The first step is transparency, and I’d encourage every organization to be radically transparent about how they are leveraging AI and what that means for their employees and customers.