AI in Cybersecurity


Separating Hype from Hyperbole

By Avkash Kathiriya

“Artificial Intelligence in cybersecurity is like a supercharged virtual fortress armed with a gazillion laser-focused cyber warriors, ready to annihilate any threat with the force of a million supernovas.” While I pulled that quote from an online hyperbole generator, the reality is that cybersecurity pros are inundated with equally exaggerated AI claims with stunning regularity. It’s easy to get wrapped up in the hype cycle; AI isn’t new, but it has recently made notable advances. Across the cybersecurity industry, you can practically feel the vacillation between rapid adoption and unyielding hesitation. So when it comes to AI and security, do we have a good path forward, or does it lead us off a cliff?

Overcoming Obstacles to AI Adoption

Security pros are justifiably tentative about artificial intelligence. Hollywood portrays AI risks as sentient robots bent on taking over the world; the real-world danger is less fantastic but can still harm an organization’s cybersecurity posture. AI systems, particularly those without adequate training data, can generate false positives and false negatives, leading to wasted resources, missed attacks, and potentially severe breaches. Because training AI models requires vast amounts of data, there are legitimate privacy concerns, particularly about how sensitive data is used, stored, and processed. AI’s reliability and trustworthiness remain in question for many. And with the hype surrounding AI often touting it as a security panacea, organizations risk relying too heavily on technology and not enough on human expertise.

Although the market’s AI enthusiasm can lead to exaggeration, there are pragmatic approaches to integrating AI technologies into a cybersecurity program – strategies that keep humans in control. A number of security challenges simply cannot be solved at scale with humans alone. There is too much information to ingest, analyze, correlate, and prioritize. AI can help analysts with the tedium they must deal with on a daily basis. The overpromises of legacy AI models contribute to the ongoing skepticism. However, advanced AI’s potential does not lie in adding another tool to your tech stack; its value lies in connecting the dots, getting the most out of your team and the tools you already have.

Adopting AI with Intention, not Impulse

Enterprises don’t need fewer security people. Their security people need fewer repetitive, monotonous tasks; they need less noise and more signal. “I went into cybersecurity to drown in log reviews and false positive analysis,” said no one ever. AI automation can reduce human intervention in the drudgery, allowing analysts to make context-rich, nuanced decisions – and to make them faster.

AI automation can address the information overload security analysts face, and upon closer examination, it can help with a variety of repetitive tasks, getting your team out of the weeds. Here are just a handful of ways security teams can adopt AI with intention, in an effort to improve both efficiency and effectiveness:

  1. Efficient Rule Drafting: The arduous task of drafting detection rules has traditionally consumed significant human bandwidth and involved plenty of guesswork. AI bots, with their ability to quickly analyze vast datasets, offer a pragmatic alternative: they can accelerate the drafting process while also refining detection criteria with machine precision (a minimal sketch of this workflow appears after this list).
  2. Seamless Integration and Orchestration: Many of today’s security tools integrate with hundreds of applications, increasing functionality but not necessarily simplicity. The challenge arises when we consider how frequently those integration needs change. Here, AI bots play a pivotal role by automating the bulk of integration processes, ensuring that cybersecurity infrastructures remain cohesive even as they evolve.
  3. Addressing the Overloaded Analysts: Amid the chorus of cybersecurity challenges, the information overload facing analysts often takes center stage. Distinguishing genuine threats from the flood of alerts is daunting. AI can help sift through this digital noise, highlighting legitimate threats, and when orchestrated effectively, it enables collaboration across the security function. This helps organizations act more quickly on context-rich insights and move from a reactive to a proactive security posture.
  4. The Meta Automation: The concept of ‘automating automation’ might sound abstract, but in a cybersecurity context, it’s a reality. AI is at the forefront of constructing automation playbooks, a move that multiplies response speed and adaptability. This can dramatically reduce the time required to build, test, and maintain effective playbooks.
  5. Effortless Documentation: Crafting exhaustive documentation and reports is a task many professionals find tedious, and it is one AI can take on. By automating this process, AI ensures consistency, thoroughness, and timeliness in reporting, lifting one more monotonous burden from the human workforce.
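To make the first item above slightly more concrete, here is a minimal, illustrative Python sketch of what an AI-assisted rule-drafting helper might look like. Everything in it is an assumption for illustration: llm_complete is a hypothetical stand-in for whatever model endpoint a team actually uses, and the canned Sigma-style output exists only so the sketch runs end to end.

# Minimal, illustrative sketch of AI-assisted detection rule drafting.
# llm_complete() is a hypothetical stand-in for whatever model endpoint a
# team actually uses; here it returns a canned draft so the example runs.
from textwrap import dedent


def llm_complete(prompt: str) -> str:
    """Placeholder for a call to your organization's model of choice.

    Replace this body with a real client call; the canned Sigma-style draft
    below only keeps the sketch self-contained and runnable.
    """
    return dedent("""\
        title: Office Application Spawning Suspicious Executable  # DRAFT - human review required
        logsource:
          product: windows
          category: process_creation
        detection:
          selection:
            ParentImage|endswith: '\\winword.exe'
            Image|endswith: '\\svch0st.exe'
          condition: selection
        level: high
        """)


def draft_detection_rule(behavior: str, log_sample: str) -> str:
    """Ask the model for a first-pass rule; an analyst still reviews and tunes it."""
    prompt = dedent(f"""\
        You are assisting a detection engineer.
        Draft a Sigma detection rule for this suspicious behavior: {behavior}
        Representative log line: {log_sample}
        Return only YAML and flag any fields you are unsure about.
        """)
    return llm_complete(prompt)


if __name__ == "__main__":
    draft = draft_detection_rule(
        "Office application spawning a suspicious executable",
        r"4688: NewProcessName=C:\Windows\Temp\svch0st.exe ParentProcessName=winword.exe",
    )
    print("--- DRAFT FOR HUMAN REVIEW ---")
    print(draft)  # Review, test against known-good data, and tune before deploying.

The design choice worth noting is that the model only produces a draft; the analyst reviews, tests, and tunes it before anything reaches production, which is exactly the humans-in-control posture described above.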

Not All the AI – The Right AI

AI is an overused buzzword, often accompanied by hyperbole and an inflated sense of urgency. Coupled with the baggage from first-generation AI tools, it’s no surprise that there is tremendous uncertainty about how and when to use it. To get beyond the bluster, we must focus our attention on the practical use cases that do the heavy computational lifting so that security teams can concentrate on higher-impact projects that better secure the organization.

About the Author

Avkash Kathiriya is the Senior Vice President of Research and Innovation for Cyware, with substantial experience in the information security domain, product management, and business strategy. He’s a popular speaker on cybersecurity strategy and trends and has served on advisory boards for multiple security startups.

He can be reached at https://cyware.com/company/contact-us and on X (formerly Twitter) at @CywareCo.


