The Future of Cybersecurity? Just One Word: Automation
By Dr. Peter Stephenson
If you are not better informed, smarter, better equipped, and faster than the adversary, you can count on your system being compromised at some point. When I’m asked about the future of cybersecurity, I generally recount a cautionary tale. As far as I know, it has never actually happened. But it brings into focus two of the most important concepts in cyber adversary threats: autonomous bots and blockchain.
Imagine the following scenario: it is late on a Friday evening at the start of a long weekend. There is a single engineer in the network operations center and a single engineer in the security operations center. Everything is quiet until the network engineer notices thousands of accounts logging in to the online banking system and removing money. At the same time, the security engineer notices the logins but sees nothing irregular about them except their volume. The network engineer is concerned, and she disconnects the remote banking system from the Internet. At that point, the security engineer notices that the attempts to remove money from accounts continue from inside the network, but because the banking system is no longer connected to the Internet, the attempts fail.
Neither the network engineer nor the security engineer can explain the sudden removal of money from so many accounts. Further investigation shows that several million dollars were removed from a few thousand accounts in the space of less than five minutes. The security engineer notifies the forensic team, and they begin trying to figure out what happened. Unusually, there is absolutely no indication of a breach. However, late on a Friday night is not when one would expect millions of dollars to be removed legitimately from several thousand accounts at the same time. The engineers and forensic specialists can offer no explanation.
Here’s what happened. Over the space of several months, an autonomous bot from a hive net quietly accessed the protected network many times. The single bot had been released into the network through phishing. It slowly sent account credentials out over port 443 (HTTPS) to a blockchain network, where they were stored. Once enough credentials were harvested, the bot destroyed itself, leaving no trace. Because the traffic moved over port 443, the exfiltration went unnoticed and was taken for normal network operation. It set off no alarms in the intrusion detection system.
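To see why that traffic blends in, consider a toy volume check of the sort a network operations center might run. Everything here is invented for illustration: the host names, the flow records, and the five-times-median threshold. A low-and-slow harvest stays comfortably under a check like this; only a burst the size of the later smash-and-grab would trip it.

```python
from collections import defaultdict
from statistics import median

# Hypothetical flow records: (source_host, dest_port, bytes_out).
# Hosts and numbers are invented for illustration only.
flows = [
    ("ws-01", 443, 5_000), ("ws-02", 443, 5_100), ("ws-03", 443, 4_900),
    ("ws-04", 443, 5_050), ("ws-05", 443, 4_950),
    ("ws-06", 443, 48_000),   # a burst; the harvesting bot never did this
]

def flag_heavy_talkers(flows, port=443, factor=5):
    """Flag hosts whose outbound volume on a port is far above the
    fleet median. Trivially evaded by trickling data out slowly."""
    by_host = defaultdict(int)
    for host, p, nbytes in flows:
        if p == port:
            by_host[host] += nbytes
    baseline = median(by_host.values())
    return [h for h, v in by_host.items() if v > factor * baseline]

print(flag_heavy_talkers(flows))   # ['ws-06']
```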
The intrusion detection system was a next-generation system using machine learning. However, prior to penetrating the network, the hive net attacked it multiple times in multiple ways, collecting the intrusion detection system’s responses. From those responses, the hive crafted attacks that would not trigger the intrusion detection system. This type of black box attack on a machine learning system is called “querying the oracle”. With the information gained, the first bot was able to enter the network as part of a phishing campaign. A second set of attacks, triggered inside the protected network, allowed the bot to query the oracle internally. The hive now had all the information it needed to complete the attack.
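Here is a minimal sketch of what querying the oracle looks like in practice. The detector below is a stand-in with a hidden threshold; no real IDS exposes an interface like this, and the numbers are invented. The point is that alarm/no-alarm answers alone are enough to map the detection boundary.

```python
def oracle(intensity: float) -> bool:
    """Stand-in black-box detector: alarms when an attack's observable
    intensity crosses a hidden threshold (0.37 here). The attacker can
    query it but never sees this code."""
    return intensity > 0.37

def probe_detection_boundary(lo: float = 0.0, hi: float = 1.0,
                             queries: int = 20) -> float:
    """'Querying the oracle': binary-search the hidden detection
    threshold using only alarm / no-alarm answers, then operate just
    beneath it, as the hive in the story does."""
    for _ in range(queries):
        mid = (lo + hi) / 2
        if oracle(mid):
            hi = mid          # alarmed: the boundary is at or below mid
        else:
            lo = mid          # silent: the boundary is above mid
    return lo                 # highest intensity known to stay silent

evasive = probe_detection_boundary()
print(f"crafted attack intensity {evasive:.4f} raises no alarm")
```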
Having gathered the defense information, the hive could now exfiltrate money from accounts without being detected. On a Friday evening the hive, using its swarm-bots, performed a smash-and-grab attack. Spoofing legitimate user accounts, the swarm logged in and transferred money out via the blockchain network. Each bot destroyed itself after performing its mission. The blockchain network terminated in a bitcoin wallet, and money in that wallet was immediately transferred onward to several additional bitcoin wallets, obfuscating the trail. The money was never recovered.
This is an example of an attack by autonomous bots: the bots do not report to a hive master or a botmaster. Unlike in current-generation attacks, the hive master simply gives the hive its objective and lets the hive operate autonomously. The bots learn from each other, and the intelligence of the hive grows.
In current-generation attacks, the botmaster manages a command-and-control server and directs the bots to attack from there. Autonomous bots, by contrast, receive their initial programming and initial commands from the hive. The hive and the bots are based upon machine learning or other forms of artificial intelligence and require no human intervention once they are programmed and their objective is defined.
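The structural difference is easy to caricature in a few lines. This sketch is purely illustrative; the tactic names and the scoring rule are assumptions, and no real botnet protocol is modeled. What matters is where the decision is made: the C2 bot waits to be told, while the autonomous bot consults and updates a shared hive memory.

```python
import random

random.seed(7)

def c2_bot(fetch_command):
    """Current generation: the bot is a puppet. It polls a
    command-and-control channel and executes whatever it is told."""
    return fetch_command()          # human botmaster in the loop

# Shared hive memory: the hive's running estimate of how well each
# tactic works. Tactic names are invented for illustration.
hive_memory = {"phish": 0.5, "scan": 0.5, "spoof_login": 0.5}

def autonomous_bot():
    """Hive model: no one issues commands. The bot picks the tactic
    the hive currently rates best, observes the outcome, and writes
    what it learned back so every other bot benefits."""
    tactic = max(hive_memory, key=hive_memory.get)
    outcome = random.random()                        # stand-in result
    hive_memory[tactic] = 0.9 * hive_memory[tactic] + 0.1 * outcome
    return tactic

print(c2_bot(lambda: "scan"))                        # human-directed
for _ in range(3):
    print(autonomous_bot(), hive_memory)             # self-directed
```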
So how do we defend against autonomous hives and swarm-bots? The only answer is that we must deploy machine learning models that learn from the attacks launched against them – in addition to known attacks – and develop defenses on the fly. That means our defenses must be smarter, faster, and more alert than current-generation tools. What does that really mean? It means that in the future humans will not be fast enough to respond. In fact, for certain types of current distributed attacks, humans already are not fast enough to respond. Lest you interpret this as “there is no place for humans in cybersecurity”, let me state clearly that you are about half right.
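What “learning on the fly” might look like at its simplest: an online model that re-estimates its baseline with every event it sees, so it adapts without waiting for a scheduled retrain. The single-feature model, the decay rates, and the thresholds below are all assumptions for illustration, not a production design.

```python
class OnlineDefender:
    """Minimal sketch of a defense that learns from what it observes:
    an exponentially weighted baseline updated on every event, so the
    model adapts at wire speed with no human retraining step."""

    def __init__(self, alpha=0.05, z_threshold=4.0):
        self.alpha = alpha            # learning rate for the baseline
        self.mean = 0.0               # running estimate of normal
        self.var = 1.0                # running spread of normal
        self.z_threshold = z_threshold

    def observe(self, x: float) -> bool:
        z = abs(x - self.mean) / (self.var ** 0.5)
        anomalous = z > self.z_threshold
        # Update even on anomalies, at a reduced rate, so repeated
        # novel attacks are gradually folded into the model.
        a = self.alpha * (0.2 if anomalous else 1.0)
        self.mean = (1 - a) * self.mean + a * x
        self.var = (1 - a) * self.var + a * (x - self.mean) ** 2
        return anomalous

defender = OnlineDefender()
for x in [0.1, 0.2, 0.1, 0.15, 9.0, 0.2]:   # 9.0 is the attack burst
    print(x, defender.observe(x))            # only 9.0 flags True
```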
Humans always will make the hard analytical decisions. To turn over all of cybersecurity to an algorithm would eviscerate human control and open the way to errors and bias in the machine learning (ML) code. However, certain functions depend upon rapid response – often at wire speed – and preclude human intervention until the event is interdicted and it is time for after-action analysis. Then, using analytical tools, humans enter the picture and make decisions that are added to the training set. In addition, ML systems often add events to their training set on their own.
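That division of labor can be sketched as two paths feeding one training set. The function names, the confidence threshold, and the labels below are hypothetical; the shape is the point: the machine interdicts and self-labels at high confidence, while the human labels the ambiguous cases after the fact.

```python
# Illustrative sketch of the human/machine partnership described above.
training_set = []    # (event, label) pairs used for future retraining
review_queue = []    # events waiting for after-action analysis

def machine_interdict(event: str, confidence: float) -> None:
    """Fast path: act at wire speed, then decide how to learn."""
    if confidence > 0.95:
        # High confidence: the ML system adds the event on its own.
        training_set.append((event, "malicious"))
    else:
        # Uncertain: park it for the human analyst.
        review_queue.append(event)

def human_after_action(label_fn) -> None:
    """Slow path: the analyst's decision becomes training data."""
    while review_queue:
        event = review_queue.pop()
        training_set.append((event, label_fn(event)))

machine_interdict("login-burst-7731", confidence=0.99)   # auto-labeled
machine_interdict("odd-dns-lookup-204", confidence=0.60) # queued
human_after_action(lambda e: "benign")   # analyst rules it harmless
print(training_set)
```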
Here’s the point… cybersecurity in the future must become a partnership between people and machines. There are things the adversary will do with ML that a human can’t hope to recognize and interdict in a timely manner. But there also are things on which the human and the machine can – and must – collaborate. The old saw is that computers do only what their human programmers tell them to do. Today – and, certainly, tomorrow – machines will learn to program machines with little to no human interaction. While it may be true that there is a human at the start of this chain, it also is true that at some point the human contribution shrinks to the point of obscurity. That is, potentially, a dangerous time for cybersecurity.
Imagine, for example, a hive net created by an especially talented hacker with malicious intent. The hive wanders through the Internet achieving its mission as assigned by its hacker hive master. But all the time it is doing the human’s bidding, it is learning and training the swarm-bots’ ML. At what point – if any – do the swarm-bots and the hive thumb their virtual noses at the human and go their own way? Does this mean that the future of cybersecurity is an endless battle of the bots, with the bots becoming ever more sentient? That is a debate for cyber philosophers, not security professionals. But – and this is a big but – what would we do if that became the case?
About the Author
Dr. Peter Stephenson has come out of retirement to focus exclusively on deep next-generation infosecurity product analysis for Cyber Defense Magazine after more than 50 years of active consulting and teaching. His research is in cyber-legal practice and cyber threat/intelligence analysis on large-scale computer networks such as the Internet. Dr. Stephenson was technology editor for several years for SC Magazine, for which he wrote for over 25 years. His research is supported by an extensive personal laboratory as well as a multi-alias presence in the Dark Web. He has lectured extensively on digital investigation and security and has written, edited, or contributed to over 20 books as well as several hundred articles and peer-reviewed papers in major national and international trade, technical, and scientific publications. He spent ten years as a professor at Norwich University teaching digital forensics, cyber law, and information security, retiring from the university as an Associate Professor in 2015. Dr. Stephenson obtained his Ph.D. at Oxford Brookes University, Oxford, England, where his research was in the structured investigation of digital incidents in complex computing environments. He holds a Master of Arts degree in diplomacy with a concentration in terrorism from Norwich University in Vermont. Dr. Stephenson is a full member, ex officio board member, and CISO of the Vidocq Society (http://www.vidocq.org). He is a member of the Albany, NY chapter of InfraGard. He has held – and since retired from – the CCFP, CISM, FICAF, and FAAFS designations and currently holds the CISSP (ret) designation.