- Your power bank is lying to you about its capacity - sort of
- Cisco and Tele2 IoT: Co-Innovation Broadens IoT Benefits Across Industries
- Black Friday deal: Save up to $1,100 on this Sony Bravia 7 and Bar 8 bundle at Amazon
- Grab the 55-inch Samsung Odyssey Ark for $1,200 off at Best Buy ahead of Black Friday
- Page Not Found | McAfee Blog
Google Cloud's Nick Godfrey Talks Security, Budget and AI for CISOs
As senior director and global head of the office of the chief information security officer (CISO) at Google Cloud, Nick Godfrey oversees educating employees on cybersecurity as well as handling threat detection and mitigation. We conducted an interview with Godfrey via video call about how CISOs and other tech-focused business leaders can allocate their finite resources, getting buy-in on security from other stakeholders, and the new challenges and opportunities introduced by generative AI. Since Godfrey is based in the United Kingdom, we asked his perspective on UK-specific considerations as well.
How CISOs can allocate resources according to the most likely cybersecurity threats
Megan Crouse: How can CISOs assess the most likely cybersecurity threats their organization may face, as well as considering budget and resourcing?
Nick Godfrey: One of the most important things to think about when determining how to best allocate the finite resources that any CISO has or any organization has is the balance of buying pure-play security products and security services versus thinking about the kind of underlying technology risks that the organization has. In particular, in the case of the organization having legacy technology, the ability to make legacy technology defendable even with security products on top is becoming increasingly hard.
And so the challenge and the trade off are to think about: Do we buy more security products? Do we invest in more security people? Do we buy more security services? Versus: Do we invest in modern infrastructure, which is inherently more defendable?
Response and recovery are key to responding to cyberthreats
Megan Crouse: In terms of prioritizing spending with an IT budget, ransomware and data theft are often discussed. Would you say that those are good to focus on, or should CISOs focus elsewhere, or is it very much dependent on what you have seen in your own organization?
Nick Godfrey: Data theft and ransomware attacks are very common; therefore, you have to, as a CISO, a security team and a CPO, focus on those sorts of things. Ransomware in particular is an interesting risk to try and manage and actually can be quite helpful in terms of framing the way to think about the end-to-end of the security program. It requires you to think through a comprehensive approach to the response and recovery aspects of the security program, and, in particular, your ability to rebuild critical infrastructure to restore data and ultimately to restore services.
Focusing on those things will not only improve your ability to respond to those things specifically, but actually will also improve your ability to manage your IT and your infrastructure because you move to a place where, instead of not understanding your IT and how you’re going to rebuild it, you have the ability to rebuild it. If you have the ability to rebuild your IT and restore your data on a regular basis, that actually creates a situation where it’s a lot easier for you to aggressively vulnerability manage and patch the underlying infrastructure.
Why? Because if you patch it and it breaks, you don’t have to restore it and get it working. So, focusing on the specific nature of ransomware and what it causes you to have to think about actually has a positive effect beyond your ability to manage ransomware.
SEE: A botnet threat in the U.S. targeted critical infrastructure. (TechRepublic)
CISOs need buy-in from other budget decision-makers
Megan Crouse: How should tech professionals and tech executives educate other budget-decision makers on security priorities?
Nick Godfrey: The first thing is you have to find ways to do it holistically. If there is a disconnected conversation on a security budget versus a technology budget, then you can lose an enormous opportunity to have that join-up conversation. You can create conditions where security is talked about as being a percentage of a technology budget, which I don’t think is necessarily very helpful.
Having the CISO and the CPO working together and presenting together to the board on how the combined portfolio of technology projects and security is ultimately improving the technology risk profile, in addition to achieving other commercial goals and business goals, is the right approach. They shouldn’t just think of security spend as security spend; they should think about quite a lot of technology spend as security spend.
The more that we can embed the conversation around security and cybersecurity and technology risk into the other conversations that are always happening at the board, the more that we can make it a mainstream risk and consideration in the same way that the boards think about financial and operational risks. Yes, the chief financial officer will periodically talk through the overall organization’s financial position and risk management, but you’ll also see the CIO in the context of IT and the CISO in the context of security talking about financial aspects of their business.
Security considerations around generative AI
Megan Crouse: One of those major global tech shifts is generative AI. What security considerations around generative AI specifically should companies keep an eye out for today?
Nick Godfrey: At a high level, the way we think about the intersection of security and AI is to put it into three buckets.
The first is the use of AI to defend. How can we build AI into cybersecurity tools and services that improve the fidelity of the analysis or the speed of the analysis?
The second bucket is the use of AI by the attackers to improve their ability to do things that previously needed a lot of human input or manual processes.
The third bucket is: How do organizations think about the problem of securing AI?
When we talk to our customers, the first bucket is something they perceive that security product providers should be figuring out. We are, and others are as well.
The second bucket, in terms of the use of AI by the threat actors, is something that our customers are keeping an eye on, but it isn’t exactly new territory. We’ve always had to evolve our threat profiles to react to whatever’s going on in cyberspace. This is perhaps a slightly different version of that evolution requirement, but it’s still fundamentally something we’ve had to do. You have to extend and modify your threat intelligence capabilities to understand that type of threat, and particularly, you have to adjust your controls.
It is the third bucket – how to think about the use of generative AI inside your company – that is causing quite a lot of in-depth conversations. This bucket gets into a number of different areas. One, in effect, is shadow IT. The use of consumer-grade generative AI is a shadow IT problem in that it creates a situation where the organization is trying to do things with AI and using consumer-grade technology. We very much advocate that CISOs shouldn’t always block consumer AI; there may be situations where you need to, but it’s better to try and figure out what your organization is trying to achieve and try and enable that in the right ways rather than trying to block it all.
But commercial AI gets into interesting areas around data lineage and the provenance of the data in the organization, how that’s been used to train models and who’s responsible for the quality of the data – not the security of it… the quality of it.
Businesses should also ask questions about the overarching governance of AI projects. Which parts of the business are ultimately responsible for the AI? As an example, red teaming an AI platform is quite different to red teaming a purely technical system in that, in addition to doing the technical red teaming, you also need to think through the red teaming of the actual interactions with the LLM (large language model) and the generative AI and how to break it at that level. Actually securing the use of AI seems to be the thing that’s challenging us most in the industry.
International and UK cyberthreats and trends
Megan Crouse: In terms of the U.K., what are the most likely security threats U.K. organizations are facing? And is there any particular advice you would provide to them in regards to budget and planning around security?
Nick Godfrey: I think it is probably pretty consistent with other similar countries. Obviously, there was a degree of political background to certain types of cyberattacks and certain threat actors, but I think if you were to compare the U.K. to the U.S. and Western European countries, I think they’re all seeing similar threats.
Threats are partially directed on political lines, but also a lot of them are opportunistic and based on the infrastructure that any given organization or country is running. I don’t think that in many situations, commercially- or economically-motivated threat actors are necessarily too worried about which particular country they go after. I think they are motivated primarily by the size of the potential reward and the ease with which they might achieve that outcome.