How AI continues to reshape the cybersecurity arsenal

As civilization advances, so does our reliance on an expanding array of devices and technologies. With each passing day, new devices, systems and applications emerge, driving a relentless surge in demand for robust data storage solutions, efficient management systems and user-friendly front-end applications. This rapid pace of technological evolution mirrors the exponential growth of the human population and our insatiable thirst for innovation and convenience.  

From smartphones and wearables to IoT devices and cloud infrastructure, the breadth and complexity of our digital ecosystem continues to expand at an unprecedented rate. This necessitates continuous adaptation and innovation across various verticals, from data management and cybersecurity to software development and user experience design. As we navigate this ever-evolving landscape, the need for scalable, agile and resilient solutions becomes increasingly paramount, ensuring that we can effectively harness the power of technology to address the challenges and opportunities of the modern world.  

As all of these areas push deeper into the digital era, the number of vulnerabilities grows with them: open doors that let attackers bypass devices to reach backend servers, manipulate data, exfiltrate information, compromise systems and harvest critical information that ends up spread across the deep and dark web. We all witnessed the recent WazirX breach, in which the cryptocurrency exchange lost $230 million in a major attack; the Disney leak, which exposed financials, strategic information and the PII of employees and customers; and the Tencent breach of 1.4 billion user records. Incidents like these reinforce the need for cybersecurity that leverages artificial intelligence to forge stronger weapons for defending the ever-under-attack walls of our digital systems.

Let’s talk about strengthening the four major pillars from an attacker’s perspective, as they form the core of any organization’s security.  

1. Source code analysis tools  

Static application security testing (SAST) tools are among the most widely used cybersecurity tools worldwide. Yet a common issue with almost all of them (including commercial ones) is an extremely high number of false positives. These are a real time sink for SecOps personnel, who end up investing time and energy researching fixes for so-called critical bugs that in many cases turn out to be merely 'low' or 'informational'. This is primarily due to factors such as:

Lack of real-life data  

The source code of most organizations is proprietary, and the tool is not allowed to collect any insights from it. Such insights can be particularly useful: which code snippet was falsely flagged as vulnerable, or which vulnerabilities were missed altogether. Without exposure to real-life scenarios, the tool cannot evolve.

Limited language support

Programming languages keep evolving through new versions, upgrades and extensions, and it is difficult for SAST vendors to keep pace. As a result, only a limited number of languages are supported, with even weaker support for fast-moving packages and frameworks.

Non-curated solutions  

The most challenging but most valuable capability for a SAST tool is to evolve along with the patterns of an organization's code. Every organization follows its own coding practices and guidelines, and most have a recurring set of secrets, variables and redundant strings in their code. A SAST tool that identifies the common patterns of bugs in developer code and curates (say) targeted training sessions, or better still, hunts for those vulnerabilities more thoroughly with stricter rule sets, can prove to be a game-changer.
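
To make the idea concrete, here is a minimal, hypothetical sketch of organization-specific secrets detection that combines custom regex rules with a Shannon-entropy check. The patterns, thresholds and sample string are invented for illustration and would need to be tuned to each organization's codebase.

```python
import math
import re

# Hypothetical, organization-specific patterns; a real rule set would be curated per codebase.
CUSTOM_PATTERNS = {
    "internal_api_key": re.compile(r"\bACME_[A-Z0-9]{24}\b"),
    "hardcoded_password": re.compile(r"password\s*=\s*['\"][^'\"]{8,}['\"]", re.IGNORECASE),
}

def shannon_entropy(s: str) -> float:
    """Bits per character; long high-entropy tokens often indicate embedded secrets."""
    if not s:
        return 0.0
    freq = {c: s.count(c) / len(s) for c in set(s)}
    return -sum(p * math.log2(p) for p in freq.values())

def scan_line(line: str, entropy_threshold: float = 4.0) -> list:
    findings = [name for name, pattern in CUSTOM_PATTERNS.items() if pattern.search(line)]
    # Flag long tokens that look random even when no explicit pattern matches.
    for token in re.findall(r"[A-Za-z0-9+/=_\-]{20,}", line):
        if shannon_entropy(token) > entropy_threshold:
            findings.append("high_entropy_string")
    return findings

if __name__ == "__main__":
    sample = 'db_password = "S3cr3tPassw0rd!"  # token: dGhpcyBpcyBqdXN0IGFuIGV4YW1wbGUgc3RyaW5n'
    print(scan_line(sample))  # e.g. ['hardcoded_password', 'high_entropy_string']
```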

With generative artificial intelligence (genAI) entering the arena, many practical applications that seemed like a distant dream just a couple of years ago are taking shape, and SAST is no different. Many organizations have internally acknowledged the challenges listed above and started to integrate supervised learning models into their offerings. One example is a large Indian bank with more than 5 million customers that was repeatedly seeing half a million issues flagged in its code, despite adjusting and tweaking the configuration of a popular commercial SAST tool. The bank began training a model to detect false positives in secrets detection, look for workarounds and better understand its API integrations, ultimately reducing false positives by 40% in two months and drastically cutting the man-hours spent on verification.

With a powerful, well-integrated AI model, the modern SAST tool can be expected to offer:

  • Company-specific rule sets and secrets detection, with a model that grows more refined and produces fewer false positives the longer it is used.
  • Support for additional programming languages, with the ability to be trained by each organization's developers with minimal effort.
  • Sharing of insights learned from every deployed model across organizations, so the tool gets better every day without ever collecting proprietary code.
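
A minimal sketch of the supervised approach described above might look like the snippet below: a simple classifier trained on analysts' past triage decisions to predict which new findings are likely false positives. The features, training data and use of scikit-learn are illustrative assumptions, not details of any particular vendor's product.

```python
# Assumed dependency: pip install scikit-learn
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each finding is summarized by a few illustrative features; labels come from
# analysts' past triage decisions (1 = false positive, 0 = true positive).
historical_findings = [
    {"rule": "hardcoded_secret", "file_type": "test", "entropy": 2.1},
    {"rule": "hardcoded_secret", "file_type": "config", "entropy": 4.8},
    {"rule": "sql_injection", "file_type": "controller", "entropy": 0.0},
    {"rule": "hardcoded_secret", "file_type": "test", "entropy": 2.4},
    {"rule": "sql_injection", "file_type": "test", "entropy": 0.0},
    {"rule": "hardcoded_secret", "file_type": "config", "entropy": 5.1},
]
labels = [1, 0, 0, 1, 1, 0]

model = make_pipeline(DictVectorizer(sparse=False), LogisticRegression())
model.fit(historical_findings, labels)

# Triage a new scan: findings with a high false-positive probability can be deprioritized.
new_findings = [
    {"rule": "hardcoded_secret", "file_type": "config", "entropy": 4.9},
    {"rule": "hardcoded_secret", "file_type": "test", "entropy": 2.0},
]
for finding, fp_prob in zip(new_findings, model.predict_proba(new_findings)[:, 1]):
    print(finding["rule"], finding["file_type"], f"false-positive probability: {fp_prob:.2f}")
```

In practice, the training data would come from the organization's own triage history, which is exactly the feedback loop the bank example relies on.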

2. Automated application scanning tools  

Again, a wide set of pen testing tools falls under this umbrella, both open source and commercial. These are sophisticated tools because of the sheer number of tasks they must execute and the range of technologies they must be compatible with to run smoothly. Some of the best automated security scanners comprise millions of lines of code and are in constant development, with ongoing bug fixes and compatibility updates to keep pace with ever-evolving technologies, platform advancements, language changes and security guidelines.

Modern automated application security scanners can perform logins, record macros, throttle requests based on server responses, identify vulnerabilities and exploit them via hundreds of different techniques. Yet even if we run the same tool against 100 different applications, it hardly 'learns' anything from each test!
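
To make the throttling point concrete, here is a minimal, hypothetical sketch of how a scanner might adapt its request rate to server feedback. The class, thresholds and URLs are illustrative assumptions rather than the behavior of any specific tool.

```python
import time

import requests  # assumed dependency: pip install requests

class AdaptiveThrottle:
    """Illustrative back-off logic: slow down when the server shows signs of strain."""

    def __init__(self, base_delay: float = 0.1, max_delay: float = 5.0):
        self.delay = base_delay
        self.max_delay = max_delay

    def update(self, response: requests.Response, elapsed: float) -> None:
        # Back off on rate limiting, server errors or slow responses.
        if response.status_code in (429, 503) or elapsed > 2.0:
            self.delay = min(self.delay * 2, self.max_delay)
        else:
            # Gradually recover speed while the server stays healthy.
            self.delay = max(self.delay * 0.9, 0.05)

    def get(self, session: requests.Session, url: str) -> requests.Response:
        time.sleep(self.delay)
        start = time.monotonic()
        response = session.get(url, timeout=10)
        self.update(response, time.monotonic() - start)
        return response

if __name__ == "__main__":
    throttle = AdaptiveThrottle()
    with requests.Session() as session:
        for url in ("https://example.com/", "https://example.com/login"):
            resp = throttle.get(session, url)
            print(url, resp.status_code, f"next delay: {throttle.delay:.2f}s")
```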

This is where AI is going to create an impact. With each application tested, the model can learn the mistakes developers repeatedly make, develop the capability to bypass CAPTCHAs and firewalls, reduce noise by eliminating test cases that are impossible in the given environment, tailor payloads to suit that environment and, most importantly, learn from every assignment what was a false positive and what was missed. Further, we can train the model to generate graphs and pointers for management that highlight the most common vulnerabilities along with their severity and financial impact on the organization.
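
As an illustration of the noise-reduction idea, the sketch below prunes a payload library based on a fingerprinted tech stack and deprioritizes checks that have historically produced false positives. The payload names, counts and scoring are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Payload:
    name: str
    targets: frozenset   # tech stacks the check is relevant for
    fp_count: int = 0    # false positives observed in past scans
    tp_count: int = 0    # true positives observed in past scans

    def score(self) -> float:
        # Prior-smoothed precision estimate learned from previous assignments.
        return (self.tp_count + 1) / (self.tp_count + self.fp_count + 2)

def select_payloads(payloads, fingerprint, min_score: float = 0.3):
    """Drop checks that cannot apply to this stack or that rarely pan out."""
    relevant = (p for p in payloads if p.targets & fingerprint)
    return sorted((p for p in relevant if p.score() >= min_score),
                  key=lambda p: p.score(), reverse=True)

if __name__ == "__main__":
    library = [
        Payload("php_object_injection", frozenset({"php"}), fp_count=1, tp_count=4),
        Payload("struts_ognl_rce", frozenset({"java", "struts"}), fp_count=9, tp_count=0),
        Payload("jsp_path_traversal", frozenset({"java"}), fp_count=2, tp_count=3),
    ]
    # Fingerprint gathered while crawling, e.g. from headers, cookies and error pages.
    stack = frozenset({"java", "tomcat"})
    for p in select_payloads(library, stack):
        print(p.name, round(p.score(), 2))
```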

This massive shift in the dynamic application security testing (DAST) sector will not only evolve the way current tools work and generate reports; it can change the complete development lifecycle, establish coding practices that every team can adopt to give the organization adequate security and evolve alongside it so that future advancements remain secure.

3. Red teaming weaponry  

Red teaming in cybersecurity represents a dynamic and comprehensive approach to assessing and enhancing an organization’s security resilience. It involves the simulation of sophisticated cyberattacks by skilled professionals, often referred to as red teams, who emulate the tactics, techniques and procedures (TTPs) of real-world adversaries. Unlike traditional security assessments that focus on identifying vulnerabilities and patching them, red teaming goes beyond by examining the effectiveness of an organization’s people, processes and technology in detecting and responding to cyber threats. 

Red teaming simulates a real-world attack without boundaries, where the motive is not only to identify vulnerabilities but also to exploit them (or create a proof of concept) and showcase the worst-case scenarios possible. Red team assessments encompass phishing, DDoS, session takeovers, client-side attacks, social engineering and more, activities that are often missing from black-box and white-box testing.

As for the tools that organizations across the globe use for red teaming, there are plenty of them, and interestingly, most of the good ones are open source. Some help with lateral movement, mapping directories and domains, privilege escalation or enumeration, while others cover any of the 2,000 possible attack techniques a red team might attempt.

Amalgamated with AI capabilities, we can expect these tools to bypass the Antimalware Scan Interface (AMSI) and antivirus products with greater ease, owing to the ability to create custom bypass scripts. We can also expect even stealthier approaches, since detection simulation can be delegated to AI to continuously improve the ninja factor. Changing script signatures, juggling function names, smuggling data out of machines and creatively tampering with logs are further jobs we can reliably delegate to AI.

4. Reverse engineering tools  

In the realm of software, reverse engineering typically involves disassembling or decompiling executable code to extract information about its source code, data structures and algorithms. This practice is employed for various purposes, including understanding legacy systems, achieving interoperability between software components, identifying vulnerabilities and detecting malicious behavior.

Reverse engineering has always been a neglected area for developers and a Swiss Army knife for attackers. Uber fell victim to it in 2016, when its developers left access keys hidden in their code that hackers then recovered by reverse engineering the mobile application, resulting in a major breach that disclosed the details of 57 million drivers and riders.

Reverse engineering tools are used to identify application behavior for creating mods, detecting malware, enhancing features and developing exploits such as overflows. Most commonly used reverse engineering tools are free yet basic in terms of functionality and assistance. The challenge remains that every application has a different architecture and codebase, so no static, universal rule set can be created to assist the analyst.

AI can be a game-changer by assisting in pattern detection to ascertain malware, applying breakpoints using best guesses on the behavior of the application, finding overflows and performing overflow simulation. AI-powered static and dynamic analysis tools can automatically identify functions, variables and control flow within binary code, helping reverse engineers to understand the behavior and structure of software applications more rapidly. By harnessing the power of AI, reverse engineers can accelerate the discovery process, uncover hidden insights and ultimately enhance their ability to understand and reconstruct complex systems more effectively.  
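
As a minimal sketch of the function-identification idea, the snippet below uses the open source Capstone disassembler (an assumed dependency, installed via pip install capstone) to scan a small x86-64 code blob for a common function prologue. Real tools rely on far richer heuristics and learned models; this is purely illustrative.

```python
from capstone import Cs, CS_ARCH_X86, CS_MODE_64  # assumed dependency: pip install capstone

def find_function_starts(code: bytes, base_addr: int = 0x1000) -> list:
    """Naive heuristic: treat 'push rbp; mov rbp, rsp' sequences as function entries."""
    md = Cs(CS_ARCH_X86, CS_MODE_64)
    instructions = list(md.disasm(code, base_addr))
    starts = []
    for prev, curr in zip(instructions, instructions[1:]):
        if (prev.mnemonic == "push" and prev.op_str == "rbp"
                and curr.mnemonic == "mov" and curr.op_str == "rbp, rsp"):
            starts.append(prev.address)
    return starts

if __name__ == "__main__":
    # Two tiny hand-assembled functions: prologue, body (xor eax,eax / nop), epilogue, ret.
    blob = bytes.fromhex("554889e531c05dc3" "554889e5905dc3")
    for addr in find_function_starts(blob):
        print(f"possible function entry at {hex(addr)}")
```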

AI: The ultimate game-changer for security 

Artificial intelligence is a game-changer that can make cybersecurity far more robust and significantly enhance detection and response capabilities. These advancements will reduce the time individuals spend on manual analysis and help automate many functional processes.

However, human interaction with such tools will remain a must, since catching logical errors and business-critical vulnerabilities, weeding out false positives, enhancing the models and reviewing each vulnerability will still require intelligent minds.

Anurag Goyal is the head of cybersecurity for RedDoorz, a Singapore-based, technology-driven hotel management and booking platform with more than 3,200 properties in Southeast Asia. He is also a dedicated cybersecurity researcher and globally certified ethical hacker, with extensive experience fortifying the security posture of more than 100 prominent organizations worldwide, including the United Nations (UN), World Bank, Uber, Zomato, Dream11, FoodPanda, Ernst & Young (EY), HDFC Bank, Axis Bank, ITC Hotels, OYO and Lenskart, among others.


