Weighing Down Cyberrisk Options: How to Make Objective Cybersecurity Decisions Without Negatively Impacting the Organization’s IT Teams
By Mike Starr, CEO of Trackd
It’s often given little more than lip service (or worse, intentionally neglected), and rarely appreciated, but security carries an operational cost. Security controls create inefficiencies, and they can also introduce operational risk. By way of example, I recently came across an intriguing new anti-malware product that uses behavioral analysis to determine when file encryption is unauthorized, and therefore indicative of a potential malware attack. When it identifies such a scenario, it locks the affected files and the accounts with access to them. Although it’s a valuable backstop against perhaps the most common attack today, there is an undeniable operational risk: a false positive could temporarily deny legitimate users access to their files, impacting the organization’s productivity. In this case, that’s likely a small price to pay for a critical layer of security, but it’s important to appreciate that the operational cost is real, and the risk is non-trivial.
Perhaps the most obvious example of the impact of cyber security activities on business operations is vulnerability remediation. In typical organizations, the cyber security team identifies vulnerabilities and passes that information along to the IT team to patch the vulnerable devices, a process that might make sense on paper but can generate understandable conflict in reality. Those two groups (Security and IT) have markedly different objectives. The cyber security team is responsible for protecting the organization from cyber attack, while IT operators are driven by system availability and corporate productivity. And, as anyone in IT knows all too well, patches can break things. Although system failures resulting from disruptive patches are much rarer today than, say, 20 years ago, IT operators are understandably apprehensive about playing Russian roulette with their networks, and by extension, their careers.
There are countless other examples of productivity-impacting security requirements, spanning the spectrum from minor annoyances (mandatory password changes) to policies with serious productivity impacts (extensive third-party screening that can delay hiring critical vendors for months). All of them are created with good intentions by security professionals with the best interest of the organization – or regulatory compliance – at heart. So how do security teams minimize operational risk and burden while still protecting the organization?
The key to healthy, but not overbearing, cyber security is first a genuine recognition that all security is about managing risk, not a futile effort to reduce it to zero, and that more tools and policies are not always a good thing. Security practitioners have to cultivate an appreciation for the impact their policies have on everyone in the organization. In the case of cyber security, less may just be more.
That appreciation, and the policies and activities that flow from it, should start with a recognition that just about all cyber attacks in today’s threat landscape originate from one of three techniques:
- Stolen credentials
- Phishing
- Unremediated vulnerabilities
This reality should inform the decisions made by the cyber security team. From concept to implementation, one question should be asked constantly: will this policy or product materially reduce the organization’s exposure to an attack initiated by stolen credentials, phishing, or unpatched vulnerabilities? A companion question: will the new policy or tool limit the attack’s severity if one does succeed? If the answer is not an obvious yes, the security team should reconsider the approach, especially if it has any discernible impact on operations.
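To make that triage concrete, here is a minimal, illustrative sketch in Python. It is not a tool from the article or from trackd; the ProposedControl fields and the should_adopt helper are hypothetical names invented for this example, and real evaluations would of course be a conversation rather than a function call.

```python
# Illustrative sketch only: encodes the article's evaluation questions for a
# proposed security control as a simple checklist. All names are hypothetical.
from dataclasses import dataclass

# The three initial-access techniques the article identifies as dominating
# today's threat landscape.
PRIMARY_VECTORS = {"stolen_credentials", "phishing", "unremediated_vulnerabilities"}

@dataclass
class ProposedControl:
    name: str
    vectors_reduced: set[str]   # which of the three vectors it materially reduces
    limits_severity: bool       # companion question: limits impact if an attack succeeds?
    operational_impact: str     # "none", "minor", or "significant"

def should_adopt(control: ProposedControl) -> bool:
    """Return True only if the control clearly earns its operational cost."""
    materially_reduces_exposure = bool(control.vectors_reduced & PRIMARY_VECTORS)
    obvious_yes = materially_reduces_exposure or control.limits_severity
    if not obvious_yes:
        # Not an obvious yes: reconsider the approach.
        return False
    if control.operational_impact == "significant":
        # Even a justified control deserves scrutiny if it burdens operations.
        print(f"{control.name}: justified, but weigh the operational cost explicitly")
    return True

if __name__ == "__main__":
    mfa = ProposedControl(
        name="phishing-resistant MFA",
        vectors_reduced={"stolen_credentials", "phishing"},
        limits_severity=True,
        operational_impact="minor",
    )
    print(should_adopt(mfa))  # True
```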
Doctors’ offices and government agencies are legendary for developing forms that demand obviously unnecessary – or redundant – information from patients and citizens, seemingly for no better reason than because they can, with no concern for the experience, time, or frustration of their constituents. We’ve all been in organizations in which the security team’s policies seemed similarly developed, with a wanton disregard for the experience or operational needs of the organization’s employees. Security teams and healthcare/government form designers alike would do well to add this question to their vocabulary:
Is this really necessary?
About the Author
Mike Starr, CEO and Founder of trackd, is a cross-functional leader and former NSA engineer with experience building and launching products in new and disruptive markets. He has built and led teams at Fortinet, OPAQ Networks, IronNet, and the NSA. Mike received his Bachelor’s degree from SUNY Alfred and enjoys nerding out on wine and reading fantasy novels in his free time.
Mike Starr can be reached online at [email protected], on LinkedIn, and at our company website.