The Evolution of Anomaly Detection and the Importance of Configuration Monitoring in Cybersecurity


Back in 1992, when I was more concerned about my acne breakouts and making the junior cricket team, a freshman at Purdue University was studying the 1988 Morris Worm: how it made unwarranted changes to Unix systems as it propagated across the network, causing what is widely regarded as the first Denial of Service (DoS) event. He quickly realised that it was hard to know when a system had been infected and what had changed, and that by the time a change was picked up, it was often too late.

File Integrity Monitoring and Configuration Monitoring

This led him to write the first release of an intrusion detection tool that would look at all changes in a system, good or bad: essentially, it allowed him to take a snapshot of the current state and compare it to a later, changed state. Understanding what changed, and where, enables a quick recovery. This gave rise to the concepts of File Integrity Monitoring (FIM) and Configuration Monitoring.
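As a minimal sketch of that snapshot-and-compare idea, assuming a small, hypothetical watch list and a JSON baseline file (an illustration of the concept, not how Tripwire itself is implemented):

```python
import hashlib
import json
from pathlib import Path

WATCHED = ["/etc/passwd", "/etc/ssh/sshd_config"]  # hypothetical watch list
BASELINE = Path("baseline.json")                    # hypothetical baseline store

def digest(path: str) -> str:
    """Hash a file's contents so any change, good or bad, alters the digest."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def snapshot() -> None:
    """Record the current state of every watched file as the baseline."""
    BASELINE.write_text(json.dumps({p: digest(p) for p in WATCHED}))

def compare() -> list[str]:
    """Report files whose current state differs from the baseline snapshot."""
    baseline = json.loads(BASELINE.read_text())
    return [p for p in WATCHED if digest(p) != baseline.get(p)]
```

The same pattern works whether the snapshot is taken before a patch, at the end of an audit, or on a schedule: the baseline is the reference, and anything that deviates from it is surfaced for review.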

As a result, Tripwire was born. Gene Kim, the founder of Tripwire, released the invention as an open-source tool, and it rapidly became one of the most widely used security tools for Unix systems, downloaded millions of times. Later, he received his first royalty check, for a couple of hundred dollars, and on further investigation found that it came from an auditor who was using the tool for financial auditing. The auditor would take a snapshot of the critical financial reporting applications and systems at the conclusion of an audit. On subsequent audits, the tool would be run again, enabling a comparison of what had changed over the past year.

This simple exercise of comparing a value to its original baseline, noticing if something has changed and whether that change is good or bad, is perhaps one of the most powerful methods of detecting anomalies in an information system. Fast-forward 32 years, and we are facing denial of service events that may not have been a result of malicious intent but rather a change or misconfiguration that was overlooked and did not trigger any alarm bells.

Case in point: A large telecommunication company in Australia had its service disrupted for hours, leaving customers with no internet, mobile, or landline access. To make matters worse, critical services such as health and emergency services were severely restricted as well. What we’ve learned so far indicates an unwarranted change to the configuration of routers as a result of an update to the parent company’s network infrastructure.

Technically, the outage was the result of a reset of the Max-Prefix filter value in the Border Gateway Protocol (BGP) configuration. The BGP Max-Prefix filter is a safety mechanism that protects a router from being overwhelmed by too many prefixes. When the number of prefixes received from a neighbour exceeds the configured maximum, the filter can tear down the BGP session, taking the router out of service. This filter helps prevent routing table explosions, which can lead to network outages. Two things combined to cause the event: an unwarranted change occurred as a result of an update, and that change was to a critical configuration setting.
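To make the mechanism concrete, here is a deliberately simplified Python sketch of that behaviour; the prefix counts and limits are made-up numbers, and real routers enforce this inside their BGP stack rather than in a script like this.

```python
# Simplified illustration of the BGP max-prefix safety behaviour (not a real BGP stack).

def max_prefix_check(prefixes_received: int, configured_limit: int) -> str:
    """Shut the session when a neighbour advertises more prefixes than the limit allows."""
    if prefixes_received > configured_limit:
        return "BGP session torn down: max-prefix limit exceeded"
    return "BGP session established"

# Before the update: the configured limit comfortably accommodates the neighbour's routes.
print(max_prefix_check(prefixes_received=450_000, configured_limit=500_000))

# After the update resets the limit to a much lower value, the same legitimate
# route count now trips the filter and the router effectively isolates itself.
print(max_prefix_check(prefixes_received=450_000, configured_limit=100_000))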

Is there a safeguard that could have prevented this or, at the very least, detected it earlier?

Not All Change is Equal

Let’s look at the change first, since the change that took place is directly tied to the misconfiguration. On its own, the change would not have raised any alarm bells; after all, it was the result of an approved, routine patch/update from a trusted vendor. But things look different when the change is viewed alongside the configuration setting it affected. The key is knowing both what changed and whether that change was good or bad.

Let us also assume that the routers were hardened according to best practices and configured properly. The problem arises when that configuration is tampered with or changed accidentally, and that appears to be what happened in this scenario. Consistent monitoring of the configuration, specifically before and after a change or update, and comparing it against benchmarks such as those offered by NIST or CIS, goes a long way towards ensuring such incidents don’t happen in the first place.

As an example, enabling Telnet on a server may be an approved change, but is it a good change? That’s where configuration monitoring comes into play.
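As a rough sketch of what such a check could look like in practice, assuming a hypothetical policy and setting names (the rules below are illustrative, not taken from an actual CIS or NIST benchmark):

```python
# Hypothetical hardening policy: each setting maps to the value the benchmark expects.
POLICY = {
    "telnet_enabled": False,       # e.g. a CIS-style "disable telnet" rule
    "bgp_max_prefix": 500_000,     # e.g. the value the baseline build specifies
}

def assess(config: dict) -> list[str]:
    """Return findings where the running config drifts from the hardening policy."""
    findings = []
    for setting, expected in POLICY.items():
        actual = config.get(setting)
        if actual != expected:
            findings.append(f"{setting}: expected {expected}, found {actual}")
    return findings

# An approved change may still be a bad change: here a patch has enabled telnet.
running_config = {"telnet_enabled": True, "bgp_max_prefix": 500_000}
print(assess(running_config))  # -> ['telnet_enabled: expected False, found True']
```

Run before and after every change, a check like this turns "something changed" into "something changed, and it now violates the hardening baseline", which is the distinction that matters.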

This is nothing new. While not as highly publicized as phishing or credential theft, misconfiguration still factors prominently as an attack vector. However, determining whether a system is misconfigured is not an easy task, and the problem is magnified when misconfigurations and changes have to be monitored across hundreds, if not thousands, of systems.

Foundational Security is Still Key

I’ve often seen organisations and teams distracted by the latest cybersecurity products without having basic security practices in place. Both Integrity Monitoring and Configuration Monitoring should be part of the foundational security controls: the controls organisations must adopt as the bedrock upon which other security measures are built.

Very often, the first indicator of compromise is that something has changed. Understanding the nature of that change, and where, when, why, and by whom it occurred, goes a long way in mitigating or even stopping incidents and attacks before they happen. Add to that the ability to assess whether the change was good or bad, often by measuring it against an industry regulation, standard, or compliance check, and you are suddenly ensuring information systems are hardened to industry recommendations. This ultimately reduces both the attack surface and the likelihood that accidental misconfigurations have detrimental impacts.

Since those pimple-prone days of my youth, I’ve come to terms with one thing: change is here to stay and is the only constant. Change is also a reliable indicator of when something is about to happen. If we have enough intel, we can understand the potential impact of that change.


