Zero Trust is hard but worth it
At the end of last year, I heard from a long-time enterprise contact who had a major security concern. The company had installed three layers of security and had just completed an audit. It showed that since they’d finished their installation they’d had five security incidents, and all of them had originated inside their security perimeter, bypassing most of their protection.
Their question was what they did wrong and how they could fix it.
What this company experienced is far from rare, and neither the sources of such problems nor the paths to correcting them are simple.
We tend to think of security as a goal we can achieve with a simple toolkit. Not so. Security is the state you achieve by dealing with all likely threats, and every threat has to be addressed in its own unique way. Problems can come from hackers gaining access to an application or database from the outside, through things like stealing credentials or exploiting weak authentication.
They can also come from exploits, where faults in a program (application, middleware, or operating system) can be used to trigger malicious behavior. Finally, they can come from malware that is introduced in some way. Combinations of the three are increasingly common. Enterprises have tended to focus, as my contact did, on perimeter security as a defense against the first of these security problems. They’ve overlooked, or maybe I should say underthought, the last two.
Fixing those other two problems doesn’t mean abandoning perimeter security; it means addressing all the possible problem sources. The issues my contact reported suggest some rules for sharpening security focus.
Rule One is that building a wall is useless if you keep the gate open. Most companies are way too lax in securing employee devices, and in the majority of my contact’s security incidents, the problem was created by an infected laptop. In their case, work-from-home meant that they extended company VPN access to devices that were not only not secured, but not even inspected. Where possible, work devices shouldn’t be used for private purposes, and vice versa.
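As a rough illustration of Rule One, here is a minimal sketch of the kind of posture check a VPN gateway could run before admitting a device. The attribute names and the 30-day patch threshold are hypothetical; in a real deployment the values would come from your MDM or endpoint agent rather than being hard-coded like this.

```python
# Minimal sketch of a device posture gate for VPN access. The attribute
# names (disk_encrypted, os_patch_age_days, edr_running) are hypothetical
# stand-ins for data a real MDM or endpoint agent would supply.
from dataclasses import dataclass

@dataclass
class DevicePosture:
    device_id: str
    disk_encrypted: bool
    os_patch_age_days: int
    edr_running: bool
    is_corporate_managed: bool

def may_connect_vpn(posture: DevicePosture, max_patch_age_days: int = 30) -> bool:
    """Return True only if the device meets a minimum posture baseline."""
    return (
        posture.is_corporate_managed          # no personal devices on the company VPN
        and posture.disk_encrypted
        and posture.edr_running
        and posture.os_patch_age_days <= max_patch_age_days
    )

if __name__ == "__main__":
    laptop = DevicePosture("lt-0042", disk_encrypted=True,
                           os_patch_age_days=90, edr_running=True,
                           is_corporate_managed=True)
    print(may_connect_vpn(laptop))  # False: patches are three months old
```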
Rule Two is learn Latin, or at least the critical Latin phrase “Quis custodiet ipsos custodes?” Freely translated, this means “Who will watch the guards themselves?” Monitoring, management, and even security tools often have privileged access to resources and applications, and in the past couple of years we’ve had two major security problems rooted in exactly this kind of foundational software: the SolarWinds breach and the Log4j (Log4Shell) vulnerability. These issues prove that the things we need to run our networks, applications, and data centers can bite us, so we have to pay special attention to them, keeping them updated and watching for maverick behaviors.
Keeping software updated is key to applying both these rules, and unfortunately that’s often a problem for enterprises. Desktop software, particularly with WFH, is always a challenge to update, but a combination of centralized software management and a scheduled review of software versions on home systems can help. For operations tools, don’t be tempted to skip releases of open-source software just because new versions come along frequently. It’s smart to include a version review of critical operations software as part of your overall program of software management and to take a close look at new versions at least every six months.
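To show what a scheduled version review might look like in practice, here is a small illustrative sketch. The inventory records, tool names, and six-month threshold are assumptions made for the example; real data would come from your software-management system or package manager.

```python
# Sketch of a scheduled version review for critical operations tools.
# The inventory entries are illustrative, not real products.
from datetime import date

REVIEW_INTERVAL_DAYS = 180  # roughly the six-month review cadence

inventory = [
    {"tool": "log-collector", "installed": "2.17.1", "latest": "2.20.0",
     "last_reviewed": date(2022, 1, 10)},
    {"tool": "monitoring-agent", "installed": "5.4.0", "latest": "5.4.0",
     "last_reviewed": date(2022, 5, 2)},
]

today = date(2022, 6, 30)
for item in inventory:
    overdue = (today - item["last_reviewed"]).days > REVIEW_INTERVAL_DAYS
    behind = item["installed"] != item["latest"]
    if overdue or behind:
        print(f"{item['tool']}: review needed "
              f"(installed {item['installed']}, latest {item['latest']}, "
              f"last reviewed {item['last_reviewed']})")
```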
Even with all of this, it’s unrealistic to assume that an enterprise can anticipate all the possible threats posed by all the possible bad actors. Preventing disease is best, but treating it once symptoms arise is essential, too. The most underused security principle is that preventing bad behavior means understanding good behavior. Whatever the source of a security problem, it almost always means that something is doing something it shouldn’t be. How can we know that? By watching for different patterns of behavior. That’s what Zero Trust, another vastly misused security term, should be all about. Sometimes it is; often it’s not.
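As a toy illustration of “understanding good behavior,” the sketch below learns which applications each user normally touches and flags anything outside that baseline. The user and application names are invented, and a real system would build its baseline from access logs gathered over a much longer window.

```python
# Sketch of behavior baselining: learn which applications each user
# normally accesses, then flag accesses outside that baseline.
from collections import defaultdict

baseline_events = [
    ("alice", "payroll"), ("alice", "ledger"),
    ("bob", "crm"), ("bob", "email"),
]

new_events = [
    ("alice", "ledger"),          # normal
    ("bob", "engineering-repo"),  # never seen before: worth a look
]

baseline = defaultdict(set)
for user, app in baseline_events:
    baseline[user].add(app)

for user, app in new_events:
    if app not in baseline[user]:
        print(f"anomaly: {user} accessed {app}, outside their normal pattern")
```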
What Zero Trust really means
There’s nothing easier than slapping a label on a product or service. If you look at exactly what a zero-trust solution is, you’ll see that we don’t really even have consensus on the meaning of the concept. How can you trust a meaningless or multi-meaning term? What we want from Zero Trust is behavior monitoring and control.
I asked my contact how many applications an average worker could access, and the company wasn’t able to get the answer. How, then, could the company know if the worker, or someone working through the worker’s laptop, was stealing data or contaminating operations? They didn’t know what was permitted, so they couldn’t spot what was unauthorized, and that’s where Zero Trust comes in.
A zero-trust system should assume that there is no implied right of connection to anything. Connection rights are explicit, not permissive, and that’s the property that’s both critical to Zero Trust security and critical to behavior monitoring and control.
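Here is a minimal sketch of what explicit, default-deny connection rights look like, assuming hypothetical group and application names. The point is simply that anything not explicitly granted is refused.

```python
# Minimal sketch of explicit, default-deny connection rights. The group
# and application names are hypothetical.
ALLOWED = {
    ("accounting", "ledger"),
    ("accounting", "payroll"),
    ("engineering", "source-repo"),
}

def may_connect(group: str, application: str) -> bool:
    """Zero Trust default: deny unless the pair is explicitly listed."""
    return (group, application) in ALLOWED

print(may_connect("accounting", "ledger"))       # True: explicitly granted
print(may_connect("accounting", "source-repo"))  # False: no implied right
```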
Nobody questions the challenge of defining not only the allowed connectivity for each worker, but also the connection requirements for management and operations software, middleware, and more. These difficulties are why enterprises often fail to adopt true Zero Trust security and why vendors may make the claim without delivering the needed capabilities. Yes, Zero Trust is more work, but no, you can’t avoid it and be truly secure.
Even meeting the challenge of defining permitted connectivity doesn’t end the hurt. A Zero Trust system has to recognize and journal attempts to make unauthorized connections. In fact, it’s that feature that makes Zero Trust so important. Almost all of the inside-the-perimeter attacks will explore connectivity and resources looking for something interesting, and in a good Zero Trust system these explorations will be detected and journaled, which alerts the company to the fact that something is wrong. Prompt action can then save the day.
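Building on the previous sketch, here is one illustrative way an exception journal could work: every refused connection attempt is recorded so that unusual exploration can be spotted and acted on. The field names and JSON record format are assumptions, not any particular product’s API.

```python
# Sketch of an exception journal: every refused connection attempt is
# recorded so unusual exploration can be detected later.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
journal = logging.getLogger("zt.exceptions")

ALLOWED = {("accounting", "ledger"), ("accounting", "payroll")}

def check_connection(user: str, group: str, application: str) -> bool:
    allowed = (group, application) in ALLOWED
    if not allowed:
        # Journal the refusal; this record is what later analysis runs on.
        journal.info(json.dumps({
            "time": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "group": group,
            "application": application,
            "decision": "deny",
        }))
    return allowed

check_connection("alice", "accounting", "ledger")       # allowed, no journal entry
check_connection("alice", "accounting", "source-repo")  # denied and journaled
```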
The best way to validate a Zero Trust system a vendor is proposing is to look at how you would apply it. It’s good if it supports a hierarchical framework for assigning connection rights, because all workers in accounting, for example, and all accounting software are likely to share common connection permissions.
It’s good to have the journaled exceptions stored in a form that traditional analytics and even AI tools can examine for patterns.
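As one hedged example of that kind of pattern analysis, the sketch below reads journal entries like those produced above and alerts when a single user has been denied access to several distinct applications, which resembles the internal reconnaissance described earlier. The threshold and record format are illustrative assumptions.

```python
# Sketch of pattern analysis over the exception journal: a user denied
# access to many distinct applications looks like internal reconnaissance.
import json
from collections import defaultdict

journal_lines = [
    '{"user": "alice", "application": "source-repo", "decision": "deny"}',
    '{"user": "alice", "application": "hr-db", "decision": "deny"}',
    '{"user": "alice", "application": "build-server", "decision": "deny"}',
    '{"user": "bob", "application": "payroll", "decision": "deny"}',
]

PROBE_THRESHOLD = 3  # distinct denied targets before raising an alert

targets = defaultdict(set)
for line in journal_lines:
    entry = json.loads(line)
    targets[entry["user"]].add(entry["application"])

for user, apps in targets.items():
    if len(apps) >= PROBE_THRESHOLD:
        print(f"alert: {user} was denied access to {len(apps)} different "
              f"applications; possible internal reconnaissance")
```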
Finally, it’s good if this seems like a bit of work, because products that don’t require much of you are likely to deliver little in return. Creating connection permissions and exception journals is essential to security, so don’t compromise these capabilities just to have an easy time. Security is hard, but recovering from security problems is harder.
Copyright © 2022 IDG Communications, Inc.