Human Factors: Why Technology Alone Will Never Equal Cyber Secure


In this episode, Kai Roer, Chief Research Officer at KnowBe4, explains how human factors will always play a role in how secure our technology is.

Spotify: https://open.spotify.com/show/5UDKiGLlzxhiGnd6FtvEnm
Stitcher: https://www.stitcher.com/podcast/the-tripwire-cybersecurity-podcast
RSS: https://tripwire.libsyn.com/rss
YouTube: https://www.youtube.com/playlist?list=PLgTfY3TXF9YKE9pUKp57pGSTaapTLpvC3

Cybersecurity is a vast professional field. There is an abundance of technology available to protect the systems we use every day, yet no matter how many of those systems are in place, we have to remember the human element as well.

Security awareness training is often seen as a burden, requiring a person to attend educational sessions. The best way to imbue security awareness in an organization is to make security part of the culture. This transforms awareness into a personal code of conduct. But can security awareness success be measured? Is there evidence to prove that security is, or isn’t, working in an organization?

I recently had the opportunity to speak with Kai Roer, Chief Research Officer at KnowBe4, a leading security awareness company for organizations of any size. Kai and his team created the Security Culture Framework, and they have written a book that is scheduled for release in April. The information in the book can help broaden security by offering evidence-based measurements for advancing security maturity in any organization.

Tim Erlin: Thanks for taking the time to speak with me today.

Kai Roer: Thank you so much, Tim.  It is a pleasure to be here.

TE: My discussions often tend towards the more technical topics in cybersecurity. Cybersecurity as a whole tends to focus on technology, but there are a lot of people involved as well, both practitioners and people out in the world who have to deal with cybersecurity on a daily basis. So, I want to steer this conversation a little bit more towards that people aspect, but I want to start with a question that occurs to me: why can’t we solve the world’s cybersecurity problems with technology alone? Why is it, from your perspective, that technology isn’t sufficient to address cybersecurity?

KR: The challenge here is that the technology we use today is created by humans, and humans are, dare I say, flawed. By flawed, I mean that we come with biases, mental concepts, and ways of working that are not made to be perfect. And because of that, whatever we create will never be perfect either.

TE: Yes, it is understandable that the technology itself can never be perfect, but I think there is more to it than that. There has to be, because when technology is interacting only with other technology, those interactions can be enforced through the technology itself. But we don’t do business that way, and we don’t deliver services that way, because ultimately we’re servicing humans, and there are human beings on the other end of that interaction. With technology as the intermediary between human interactions, you can’t remove the humans from the process. So, technology is ultimately limited by the fact that there are people and processes involved. Is that a reasonable way to look at it?

KR: Yes, of course. We cybersecurity professionals have always spoken of people, process, and technology. It’s very important to remember this, especially if you are like me, very focused on the people side; if you’re a compliance officer, you tend to be more on the process side; and, of course, technologists tend to focus more on technology. One thing I believe we sometimes forget is that these three aspects work together. If you change one of them, you will automatically influence the others.

One example that comes to mind is the evolution of seatbelts in cars. They are a standard safety feature now, but when they were first introduced, the problem was that very few people actually used them. There were varying reasons for that: people didn’t really care, it wasn’t cool, or they didn’t really understand how to use them. Then, some years later, governments decided that they had to reduce traffic fatalities, and they needed a way to do that. At the time, making seatbelts mandatory was a simple method. Two things happened as a result. First, car manufacturers started to put seatbelts in their cars. Seatbelts were no longer an optional extra but standard equipment; every car had them. Second, people considered using them, but not everyone did. So, even if you have a policy in place, such as a mandatory government-based initiative, and the technology in place, people still need incentives.

Just having the rule and the technology available is good, but if I am not motivated to actually use it, or to follow whatever rule is in place, I still may not do it. When it comes to seatbelts, we didn’t use them until the policy was enforced as law. The police started pulling people over and issuing hefty fines for failure to wear a seatbelt. That sparked a huge change in behavior.

TE: It’s interesting to see the progression of behavior change for people who lived through that process. They had to go through it from a legislative and policing angle, but people who have grown up wearing a seatbelt as a habit would probably continue using it regardless of the law, for the most part. It created a habit, which is an interesting way to think about it.

KR: Yeah. And a mental concept, right? I still remember back in the day when it was cool to drive your car without using the seatbelt, because you were young and tough and invulnerable.

TE: Yes, a managed risk.

KR: Exactly. But now, people don’t even think about it. It’s just an automatic habit.

TE: Is there a parallel in cybersecurity that fits there? I don’t know if there is, because the technology changes fairly quickly.

KR: Well, passwords, for example. That is a topic that most people dislike. One of the challenges we have as an industry is that we give advice one year, and the next year that advice changes. Sometimes it’s even the opposite advice! I remember being told that you should never use the same password, and then that you should use the same password. You should never write them down; then you should write them down. They should be short and simple so you can remember them; no, they need to be extremely complex so that no one can guess them. Even today, people wonder how they are supposed to handle passwords.

TE: The US government recently put out an interesting memorandum directing government agencies to implement some pieces of zero trust as they move towards broader implementation. One of the items in there is that they should not enforce password rotation. So, effectively, the advice that we’ve always given out, that you should frequently change your passwords and never reuse them, is now being withdrawn. This is something that for so many years has been enforced technically. That guidance certainly does change.

KR: Yeah. And that’s another perspective. We can use technology to enforce a policy, or a certain behavior in this case, and then remove some of the constraints that we have deliberately put into the technology. This is really interesting when it comes to human behaviors. How can we influence people to do the right thing? We pointed to the policing: having the rules and the technology in place, and then making sure that people actually follow them. That’s one side of it. The other is using technology to steer our behavior towards what is best. This is another thing that I believe we should be much, much better at.

There are a number of reasons for it. As humans, we come with all kinds of flaws. For example, if I am concentrating on one thing, it may be very difficult to do or focus on something else at the same time. In the cybersecurity realm, this distraction is exactly what may make me open an attachment, click on a link, or reuse a password, because my mind is not there. However, if we can use technology to refocus my attention so that I recognize the problem, then we have significantly reduced the risk.
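As a concrete illustration of that kind of attention-refocusing nudge, consider a mail gateway that tags messages arriving from outside the organization. The sketch below is purely illustrative, not anything Kai describes specifically; the domain name, banner text, and sample message are invented placeholders.

```python
# A minimal sketch of an attention "nudge": flag mail from outside the
# organization so a distracted reader pauses before opening an attachment.
# The domain and banner text are invented placeholders.
from email import message_from_string
from email.utils import parseaddr

INTERNAL_DOMAIN = "example.com"  # assumption: your organization's mail domain
BANNER = "[EXTERNAL] Pause before opening attachments or clicking links."

def nudge_external_mail(raw_message: str) -> str:
    """Prepend a warning banner to the subject line of external mail."""
    msg = message_from_string(raw_message)
    _, sender = parseaddr(msg.get("From", ""))
    domain = sender.rpartition("@")[2].lower()
    if domain != INTERNAL_DOMAIN:
        subject = msg.get("Subject", "")
        del msg["Subject"]  # headers are append-only, so delete then re-add
        msg["Subject"] = f"{BANNER} {subject}".strip()
    return msg.as_string()

# A message from an unknown domain gets the banner; internal mail passes through.
raw = "From: someone@unknown.example\nSubject: Invoice\n\nPlease open the file."
print(nudge_external_mail(raw))
```

The point of the banner is not enforcement; it is to interrupt a distracted mind at exactly the moment the risky click would happen.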

TE: That’s really fascinating to think about, because so often we consider technical controls as enforcing a policy or enforcing a particular requirement from a standard, but really what technical controls do is impact human behavior in some way. If we thought more about how they impact human behavior, we might actually design them differently to get the outcome that we ultimately want. It sounds like that’s what you’re saying.

KR: Exactly. This is where I believe that we as an industry, especially the security industry but also the computer industry, can learn something from science. A couple of my heroes, Richard Thaler and Cass Sunstein, have done a lot of research in this area and published a number of books, including Nudge. One of their key messages is that we should embrace the fact that humans are wired the way we are by supporting behaviors, by making it easy to do the right thing instead of making it difficult. That’s where I believe we can learn a lot, especially those of us who have been in this industry for quite some time. We have seen that we use technology to enforce some policy, but not necessarily in a way that makes it easy for the person using the technology. Often, the opposite is true. By taking Thaler’s research, for example, and learning from it, we can adapt and build technology that makes it easy for people to do the right thing, even if it seems difficult or strange in the moment.

Do you remember when the first iPhone was released? Do you remember what you thought when you saw this big screen and one button only? I fell in love with it, obviously, but I was curious about how anything was possible with just a screen and one button. What the folks at Apple did was research how to make it as usable as possible without being hung up on how we previously did things. This is something that we, as an industry, should learn more about.

TE: That philosophy relates nicely to security culture, because organizations, whether they know it or not, have some kind of security culture. You have spent a significant amount of time figuring out how to measure that and how to influence it. How do these principles of simplicity and influencing human behavior relate back to security culture for an organization?

KR: Security culture is all about risk management. Understanding what kind of risk is present, and understanding how the human side of the organization is handling risk, is part of the security culture. You have a security culture even if you can’t specifically define it. When we look at this, we need to look into the people, process, and technology, or, more specifically, how to make technology enable people to make the best decisions. That facilitates a good security culture: it makes the right thing easy, which in turn shapes employees’ opinions about security, for example.

Organizations should spend more time understanding the actual needs of their employees, not teaching them every single piece of security that we as security professionals seem to believe everybody else needs to know as thoroughly as we do. Instead, try to figure out what that person, that team, that function, or that role needs from a security perspective, give them that, and remove all the other pain, or at least make it easy to do the right thing.

TE: That makes me think about how, when an organization selects a particular implementation of some security control, their selection criteria are usually technical and oriented towards the security team, not necessarily towards whether that particular implementation of the control fits the organization’s security culture.

KR: And how can you change it? Or even before that: is this company’s way of organizing the work something that was decided upon because it makes sense for this organization and its risk management, or did it just happen by accident, which it sometimes does? This is a very good example of one of the perspectives of security culture.

TE: How does the Security Culture Framework fit here? Tell us about that.

KR: The main purpose of the Security Culture Framework is to build and maintain security culture. When we created it, very few organizations focused on security culture. There were probably a handful of people around the world using that term, and most of those were in academia. The industry was still focused on antivirus and a lot of technical controls. Very few organizations realized the need, or wanted to take it a step further when it came to training the employees. We’ve had awareness training for a long time, and some organizations realized that we need to fix this if we consider employees part of the organization. There are risks that employees need to understand, be aware of, and be trained to avoid. The cybersecurity industry has worked a lot with technological and policy solutions, but those alone don’t really work. We needed more. That’s the backdrop of why we created the Security Culture Framework.

The framework itself is really simple. It’s four simple steps based on PDCA: plan, do, check, act. You start by measuring where you are today. How do you know what kind of culture you have, and what does the culture you want look like? Then, you start involving the organization. Back when we created the framework, we talked a lot about the financial aspects: how do you fund your projects? How do you make a business case? How do you engage with the board and executives, because you need their support? Also, identifying the target audience, which is actually a marketing term. Who are the employees in your organization? How can you know what they need? The focus is on the recipient, not you. What is best for the organization? What kinds of activities work best for it? Do we give them trainings? Do we set up lunch-and-learn sessions? All those kinds of things that we do today, no one really knew back then. And then, of course, you figure out the gaps and you do it again with the necessary adaptations.
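For readers who think in code, the loop Kai describes can be pictured roughly as follows. This is only a sketch of the plan-do-check-act cycle; the metric, the target score, and the effect of the activities are invented placeholders, since the real framework is an organizational process rather than a program.

```python
# A rough, illustrative rendering of the framework's plan-do-check-act loop.
# All numbers here are invented placeholders.

def measure_culture() -> float:
    """Plan: stub for a baseline measurement, e.g. a survey score from 0-100."""
    return 62.0  # placeholder baseline

def run_activities(score: float) -> float:
    """Do: stub for trainings, lunch-and-learns, nudges, and so on."""
    return score + 4.0  # placeholder effect per cycle

TARGET = 80.0  # Plan: the culture you want, expressed as a score

score = measure_culture()              # Plan: where are we today?
for cycle in range(1, 6):
    score = run_activities(score)      # Do: involve the organization
    gap = TARGET - score               # Check: measure again, find the gaps
    print(f"cycle {cycle}: score={score:.1f}, gap={gap:.1f}")
    if gap <= 0:
        break                          # Act: target reached, set a new one
    # Act: otherwise adapt the activities and run the loop again
```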

TE: The Security Culture Framework has been around for a number of years, and you’re taking this to the next step with new research. You have a book coming out in April, which is the result of that research. What is that next topic for security culture that you’re working on?

KR: One of the things we are doing now is proposing a security culture maturity model, which is something I’ve been working on for a number of years. Given the opportunity to work with data, or evidence, we decided to dig into the numbers to answer the question: what does an evidence-based maturity model look like from a security culture perspective? I have a talented team who understands how to quantify the data to derive meaningful results. We discovered that there are certain behaviors, at the employee, team, or organizational and industry level, that really can help identify how mature your security culture is.

Some of them are bad behaviors. So, basically, as an organization’s security culture matures, you would expect to see a sharp drop in risky behaviors, for example, sharing credentials in response to a phishing email. Other cultural maturity indicators are improvements; for example, you would expect to see more reported phishing attempts. Those are the stories that the data tells us. From my perspective, this is a huge innovation. It’s something that I’m very proud of, and I’m very happy that my team has been able to contribute to this evidence-based maturity model.
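The two kinds of indicators Kai mentions point in opposite directions: risky behaviors should fall as culture matures, while good behaviors such as reporting should rise. The sketch below makes that distinction concrete; the indicator names and quarterly numbers are fabricated for illustration and are not KnowBe4 data.

```python
# Illustrative only: two maturity indicators with opposite "good" directions.
# All names and numbers are fabricated sample data.

# Quarterly rates per 100 employees (fabricated)
indicators = {
    "credentials shared via phish": {"trend": [9.0, 7.5, 4.0], "good": "down"},
    "phishing attempts reported":   {"trend": [12.0, 18.0, 27.0], "good": "up"},
}

for name, data in indicators.items():
    first, last = data["trend"][0], data["trend"][-1]
    improving = last < first if data["good"] == "down" else last > first
    verdict = "maturing" if improving else "needs attention"
    print(f"{name}: {first} -> {last} per 100 employees ({verdict})")
```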

TE: So the maturity model presents a causal relationship between specific human behaviors within the organization and fewer security incidents. Is that a good way to think about it?

KR: Yes. It’s a human perspective on it, but it’s not only behaviors; it is also knowledge. For example, we measure the kind and level of knowledge of the employees. We use what we call the security culture survey, or security culture index, which is a survey that organizations use to measure what their security culture looks like. The higher the score, the better the measured maturity. We call these the security culture indexes, or indicators. We are building a library of these indicators to help organizations navigate and identify where they are in their maturity journey, and then, based on where they are, how they can improve.
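To make the idea of an index tangible, here is a toy composite of the kind of survey-based score Kai describes. The dimensions, the unweighted average, and the level names and cut-offs are all invented for illustration; they are not KnowBe4’s actual survey model.

```python
# A toy survey-based culture index. Dimensions, weights, level names,
# and cut-offs are invented for illustration.

SURVEY_SCORES = {"attitudes": 74, "behaviors": 68, "knowledge": 81}  # 0-100, fabricated

def culture_index(scores: dict) -> float:
    """Unweighted mean of dimension scores, as a simple stand-in."""
    return sum(scores.values()) / len(scores)

def maturity_level(index: float) -> str:
    """Map an index score onto a named level (cut-offs are arbitrary)."""
    levels = [(90, "5 - Sustained"), (75, "4 - Managed"),
              (60, "3 - Defined"), (40, "2 - Developing")]
    for cutoff, name in levels:
        if index >= cutoff:
            return name
    return "1 - Ad hoc"

idx = culture_index(SURVEY_SCORES)
print(f"index = {idx:.1f} -> level {maturity_level(idx)}")  # index = 74.3 -> level 3 - Defined
```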

TE: It seems highly valuable, because at this point, I think there are lots of opinions about which indicators might have a truly positive impact on security outcomes. Having data and evidence to back that up really gives organizations a way to make decisions that lead to far better results.

KR: “Facts over fiction” is one of my favorite sayings these days.

TE: What is the exact title of the book that is being released in April?

KR: The title is The Security Culture Playbook. It’s published by Wiley and is available for pre-order on the Wiley site, and all the other bookseller sites.

TE: Thank you for that, and also for spending time speaking with me today. It was a really fascinating conversation. I appreciate it.

KR: Thank you so much for this opportunity to share.


