No Integrity, No Trust. The Foundation of Zero Trust Architecture
In the episode, Tripwire’s Maurice Uenuma discusses the role of integrity when it comes to zero trust architecture. With results from our latest research survey on The White House’s Executive Order and zero trust, he and Tim make the case that zero trust cannot be maintained without proper integrity controls at its foundation.
Spotify: https://open.spotify.com/show/5UDKiGLlzxhiGnd6FtvEnm
Stitcher: https://www.stitcher.com/podcast/the-tripwire-cybersecurity-podcast
RSS: https://tripwire.libsyn.com/rss
YouTube: https://www.youtube.com/playlist?list=PLgTfY3TXF9YKE9pUKp57pGSTaapTLpvC3
Breaches and cybersecurity incidents are making headlines every day. What are you doing to be prepared? One way to protect an organization is by using a zero trust architecture. Another way is to use integrity monitoring. Maurice Uenuma, Vice President of Federal and Enterprise at Tripwire, possesses a wealth of knowledge on these two topics. He shared some insights on how to make these approaches a reality for your security program.
Tim Erlin: While it would be hard to miss this topic in the industry today, we should probably start with a brief discussion of what zero trust architecture means.
Maurice Uenuma: There are several different definitions out there, but the one most relevant for me is that zero trust is a set of design principles, an overarching strategy for security, that eliminates implicit trust and shifts the burden of assessing and validating the trustworthiness of individuals or devices onto a per-session basis, based on what they’re trying to accomplish in that moment. There is no assumption that, just because someone has logged into a particular enterprise environment with valid credentials, they can now be trusted for the entire time they’re in that environment. It really segments access down into brief sessions, each focused on whatever specific service or set of data is being accessed.
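To make that per-session model concrete, here is a minimal, hypothetical sketch of a deny-by-default check that evaluates each request on its own terms. The field names, users, resources, and policy table are illustrative assumptions, not drawn from any specific product or standard.

```python
# Minimal sketch of deny-by-default, per-request trust evaluation.
# All names (Request fields, users, resources) are illustrative only.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    mfa_verified: bool        # authentication strength for this session
    device_compliant: bool    # e.g., a posture check passed for this session
    resource: str             # the specific service or data set being accessed

def is_allowed(req: Request) -> bool:
    """Evaluate trust per request; nothing is trusted by default."""
    if not req.mfa_verified:
        return False          # valid credentials alone are not enough
    if not req.device_compliant:
        return False          # the device must be in a known good state
    # Access is scoped to the specific resource, not the whole environment.
    allowed = {"alice": {"payroll-reports"}, "bob": {"build-server"}}
    return req.resource in allowed.get(req.user, set())

print(is_allowed(Request("alice", True, True, "payroll-reports")))  # True
print(is_allowed(Request("alice", True, True, "build-server")))     # False: out of scope
```

The point of the sketch is simply that every request is re-evaluated against the session’s context rather than riding on the initial login.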
TE: Yes, and it’s helpful here to point out the antithesis of zero trust as an illustration of what it is. This idea that you authenticate and are then implicitly trusted with access to all the resources, data, and applications inside some kind of perimeter – access granted through that initial login – that’s the way things have worked in the past. Zero trust presents an alternative set of principles to that process.
MU: I think that’s a great way to describe it. By specifically zeroing in on the perimeter, zero trust acknowledges the reality of far more distributed environments that do not have any defined secure perimeter. It also acknowledges the reality that, in many cases, attacks – particularly if they are launched by determined attackers who are well-resourced and who have a great deal of time – sooner or later may be able to get in. So, we assume that we can’t just keep them out.
TE: Yeah. One of the tricky things about the term “zero trust architecture” is that the word “architecture” in a lot of cases implies a very well-defined definite object. But as you point out, zero trust is really a set of principles. In many cases, it’s implemented partially in different ways. There are different documents out in the world that describe zero trust in different terms, as well. So, it’s not that there’s a perfect ideal of a zero trust architecture. It’s more of the general direction in which all organizations that are trying to achieve some kind of zero trust are marching.
MU: That’s a great way to describe it.
TE: Moving away from that definition of zero trust for a second, the other topic I wanted to introduce in this conversation is the term “integrity monitoring,” which is a big part of Tripwire’s history. You often do a great job of articulating what integrity monitoring means, so I want to give you a chance to put that out in the world here, as well.
MU: Thanks, Tim. Within the context of security, integrity monitoring can be thought of in two ways. One, it is a broad organizing concept and kind of mindset, if you will. And secondly, it is a set of more specific technical security controls, like file integrity monitoring, secure configuration management, and so forth. It’s very important to understand, first and foremost, the conceptual significance of integrity.
If we think about big concepts in security, there are certainly some very familiar ones. For example, risk management is understood as a discipline, as an approach to mitigating risks by acknowledging and trying to anticipate known and unknown risks, and as a means of reducing vulnerabilities. We at Tripwire have developed different ways to deal with risk. But another concept is this idea of trying to understand the threat and deal with the threat.
Threat intelligence is one way to try to reduce the uncertainty and increase our knowledge about threats so that we can then respond in a targeted or customized fashion to what we understand them to be. There are also other ideas, such as deterrence. Having an understanding of these concepts, explicitly and implicitly, is very, very important. To tie these thoughts together: integrity, we would argue, is fundamentally about ensuring and maintaining known good states. It’s about ensuring that we are exercising influence and control over ourselves in a way that ensures that – regardless of whether we know the threats or not, regardless of whether we’re able to reduce all vulnerabilities, and regardless of whether we’re able to affect potential adversary behavior through deterrence – we can maintain a state of goodness. And if we do, we’ve actually accomplished a great deal from a security standpoint.
TE: It’s really interesting the way you describe integrity as ensuring a known good state. It’s expansive. We often interchangeably use the phrases “integrity monitoring” and “file integrity monitoring,” which are common, well-understood capabilities. But this idea of taking a step back and understanding the two as components of maintaining a known good state implies other capabilities. You have to know what a good state is, and that is not necessarily a static entity, either. So, that threat environment piece becomes really relevant. The idea that the state of your asset may change because of a change in the external threat environment is a concept that doesn’t often get married with integrity monitoring, but it really is part of it. When you think about it, Maurice, can you talk a little bit about what you see as the difference between change detection and integrity monitoring?
MU: Yeah, certainly. Change detection is a very specific action that we take that alerts us to the fact that something has changed, but it provides little to no context as to whether that change is expected or unexpected, authorized or unauthorized, or who made the change. From a security standpoint, it’s really the context of that change detection that is very important. So, if we’re talking about file integrity monitoring (FIM), for example, in order to do it well from a security standpoint, we have to both capture all the changes out there and then deal with the inherent challenge that arises right away, which is that there is now a greater noise-to-signal ratio that we have to deal with. Narrowing the focus down to those changes that matter is the challenge.
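As a rough illustration of the distinction Maurice draws, the sketch below hashes monitored files against a stored baseline. Detecting that a hash differs is the change-detection part; comparing the change against a list of expected, authorized changes supplies one small piece of the context he describes. The monitored paths and the approved-change list are hypothetical, and real FIM products carry far more context (who changed what, and when) than this.

```python
# Rough sketch of change detection vs. adding context, assuming a simple
# baseline of SHA-256 hashes and a hypothetical list of authorized changes.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_baseline(paths: list[str]) -> dict[str, str]:
    """Record the known good state: file -> hash at a trusted point in time."""
    return {p: sha256_of(Path(p)) for p in paths if Path(p).exists()}

def detect_changes(baseline: dict[str, str]) -> list[str]:
    """Change detection: every file whose current hash differs from the baseline."""
    return [p for p, h in baseline.items()
            if not Path(p).exists() or sha256_of(Path(p)) != h]

def triage(changed: list[str], authorized: set[str]) -> list[str]:
    """Context: keep only unexpected changes -- the signal in the noise."""
    return [p for p in changed if p not in authorized]

# Hypothetical monitored paths and an approved change (e.g., a change ticket).
baseline = build_baseline(["/etc/ssh/sshd_config", "/etc/hosts"])
authorized_today = {"/etc/hosts"}
print(triage(detect_changes(baseline), authorized_today))  # changes worth investigating
```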
TE: I want to start trying to stitch these topics together into a cohesive narrative, if you will. We talked a little bit about zero trust architecture. We talked about integrity monitoring, the difference between change detection and integrity monitoring, and the value of what I would call context: knowing what a trusted state is for a given asset. We can see how these things are all related – this idea of understanding what a trusted state is, and being able to measure that trusted state and monitor it for changes, is really a concept that’s foundational – has to be foundational – to a successful zero trust architecture. Does that make sense to you, Maurice?
MU: It absolutely does, and I would take it a step further, actually, and suggest that zero trust assumes integrity – as a broad concept – as a key principle, in the sense that there is a requirement for continuous re-validation of trustworthiness: that systems are in a trustworthy state and that the human beings connecting into a particular environment or session are trustworthy. In addition to that general concept, there are many other security controls that a zero trust architecture calls for, and their integrity has to be maintained for the whole thing to work.
TE: That’s right. Something that I don’t see talked about a lot, which is vitally important, is what is required to actually maintain the integrity of the architectural components of a zero trust architecture. There are a lot of conversations about how you authenticate successfully, how you determine the trustworthiness of a request, or whether it’s an individual or a device, but how you maintain the trustworthiness of the systems involved in the architecture itself doesn’t seem to be a big topic of discussion. It feels to me like it’s really missing.
MU: It does seem that way. For a long time, we have been purveyors of integrity and integrity solutions, so, naturally, we’re more sensitized to looking for where integrity shows up or where it may not. But I think it’s fair to say, in any sort of objective analysis, that in order to trust a particular device connecting into a given system for a particular session, the secure state of that device is a very important factor in addition to it presenting the proper credentials. It may properly authenticate as a device that carries some degree of trust by virtue of being an enterprise-maintained, enterprise-issued device. But do we know that the device is actually in a hardened state? Do we know whether actions have been taken on that device – changes made that would take that system out of compliance with the desired standard, or introduce some other risk or indication of compromise that we would want to know about?
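Maurice’s point about the device’s hardened state could be expressed, in very simplified form, as a posture check against a hardening baseline before the session is trusted. The setting names and expected values below are illustrative assumptions, not a real benchmark or vendor policy.

```python
# Simplified, hypothetical posture check against a hardening baseline.
# Setting names and expected values are illustrative, not a real benchmark.

HARDENING_BASELINE = {
    "disk_encryption": "enabled",
    "os_auto_update": "enabled",
    "ssh_password_auth": "disabled",
}

def posture_findings(reported: dict[str, str]) -> list[str]:
    """Return the settings that have drifted from the desired hardened state."""
    return [setting for setting, expected in HARDENING_BASELINE.items()
            if reported.get(setting) != expected]

def device_trusted(reported: dict[str, str]) -> bool:
    """Grant session trust only if the device matches the baseline."""
    return not posture_findings(reported)

laptop = {"disk_encryption": "enabled", "os_auto_update": "disabled",
          "ssh_password_auth": "disabled"}
print(device_trusted(laptop))    # False: valid credentials alone would not be enough
print(posture_findings(laptop))  # ['os_auto_update'] -> drift to investigate
```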
TE: Excellent point. I want to introduce a little bit of data into this conversation because you’re quite right. At Tripwire, we have a hammer, and we’re looking for nails. That’s the reality of being a vendor in this space, but there is some data from a recent survey we conducted that talks about the relationship between integrity monitoring and zero trust. The two data points I think are relevant to this conversation are as follows. First, we asked how important integrity monitoring is to a successful zero trust strategy. Half of respondents said it’s foundational, and 43% said it’s somewhat important. The rest indicated lesser degrees of importance.
The second question we asked was, “Which of the following statements do you consider to be core tenets of zero trust?” The tricky part about this question was that the list of available answers was exactly the set of tenets from the NIST definition of zero trust. So, the respondents should have chosen all of the statements, and “all of the above” was certainly a choice. One of the choices was that “the enterprise monitors and measures the integrity and security posture of all owned and associated assets.” Surprisingly, only 22% of the respondents said they thought that was a core principle of zero trust. So, I was shocked by both of these responses: that so many people felt integrity monitoring was foundational to zero trust, because I don’t see it as part of the conversation today, and that so few people identified that tenet as core. So Maurice, what’s your reaction to that set of data?
MU: That’s a tricky one to untangle. I would suggest that it has to do with the very fact that practitioners tend to be people who have a great deal of technical expertise in IT functions or security domains. They may tend to focus on the technical implementation of a particular architecture, looking for specific security controls that have been explicitly outlined. So, on one hand, conceptually, they may view that question as an abstract one. They’re going to say, “Yes, integrity is important,” because they are familiar with the CIA triad – Confidentiality, Integrity, and Availability. But then, as soon as we start asking about how this actually gets implemented in practice, they may not think about integrity as a more specific control right away.
TE: That’s an interesting point. The concept of integrity monitoring seems very reasonable – it seems foundational to zero trust – but the ability to implement it is difficult. How do we, as an industry, start to bridge that gap? If we’re building zero trust architectures without adequately accounting for something that’s foundational to success, we’re going to see a lot of failures ultimately, right?
MU: That’s a tough question. It’s one of those tough questions that persist in our field, kind of like, “How do we get users to exercise more secure behavior?” We’ve known about the problem for decades. We’ve struggled with it, and we continue to struggle with it. But I think the answer is similar to the answer for how we successfully implement foundational controls. Whether we’re looking at the CIS Controls, the NIST Cybersecurity Framework, or any of the other commonly accepted frameworks for cybersecurity, we need to use the best practices that are built upon a great deal of data and expert insight so that we can trust that they are legitimate. The question is not, “Is the thing to do magical, unique, or difficult?” The real question is, “How do we do it consistently, at scale, and reach the parts of the environment that need to be reached?”
TE: Your analogy makes sense to me, except that I’m super sensitive about focusing so much on perfection that it makes the task seem impossible in these scenarios. It’s like with ransomware. We get ourselves into this position where we implicitly believe that a successful ransomware attack is inevitable, so we stop focusing on preventative controls and move on to just incident response. Wouldn’t you agree that you can make incremental improvements in prevention that reduce the chance that you’re going to suffer a successful ransomware attack?
MU: I think that’s a great point, and there is a way to arrive at a reconciliation of these concepts. Let me attempt to frame it this way. At Tripwire, we’ve been in this business a long time, so we have interacted with thousands of customers in the federal space, in local government, and across a variety of other industries. We often see this tendency toward chasing the newest technology solutions. There’s a tendency for a customer to think that there must be a different tool out there that can maybe make the problems go away. As we see new security technologies emerge, there’s an excitement and an anticipation that maybe that can help, but we’ve also seen in many, many cases where organizations will suffer a breach and then realize it’s not about the shiny new thing.
It’s about actually extending some basic controls that they may already have on their critical systems further out across the environment. So I wouldn’t disagree with what you’re suggesting: a great deal of gain can be made from improving from, say, 80% to 85%, or from 85% to 90%. Typically, there is a trade-off for the organization in terms of where they invest their time, effort, and resources – between looking for a net-new architecture, control, or tool versus extending something that is working at an 80% level and trying to improve it. What we consistently find is that going back to the basics and doing them well is what matters most.
TE: In many cases, it seems to be easier to get funding for a large new project, potentially a capital expenditure, than incremental funding to expand something that is very successful. When it’s successful, it isn’t top of mind. That budgeting aspect is something that we often forget about when we’re looking at the latest, greatest technology and getting excited about whatever promises it can make. This is a good conversation for us to have. We’ve surfaced integrity monitoring as a core component in zero trust, and hopefully it spreads a little bit. Other people are going to have this conversation about how integrity is foundational to successful zero trust. Maurice, what would you like to see change in the industry around this zero trust conversation? Because it’s a big conversation with an Executive Order driving it. We’re going to see federal adoption and then kind of the waterfall from that into other commercial spaces. What would you like to see change about that conversation as we move forward?
MU: One way to up-level the conversation is to more explicitly state and underscore the underlying assumptions that go into it. We know that the concept of “deny by default,” or removing implicit trust, is a core principle of zero trust. But the conversation we’ve had so far helps to elucidate one of the other principles that underlie zero trust. When we look at the conversations around zero trust, we tend to bounce between the conceptual and the much more specific, practical security controls. However, being able to state more quickly what the conceptual building blocks of zero trust are may help clarify a number of things about security strategy in general. That is one thing we could do a better job of, and hopefully this brief conversation contributes in some way to that. Of course, we should also continue the work already begun by NIST, CISA, and other government agencies to specifically articulate the technical controls that should be part of a zero trust architecture and to organize them into maturity models.
TE: I’ve spent some time with the draft documents of the CISA zero trust maturity model and the Office of Management and Budget’s draft memorandum. I don’t think they’re perfect by any means, but they have the potential to really drive material change, and that’s something that we shouldn’t think twice about. It’s highly valuable.
I want to thank you for spending time with us. It’s an interesting conversation. Thank you, Maurice.
MU: Thank you, Tim. Great to talk with you.