Can AI and automation properly manage the growing threats to the cybersecurity landscape?
Organizations are turning to automation and artificial intelligence (AI) to cope with a complex and expanding threat landscape. However, if not properly managed, these technologies can bring drawbacks of their own.
In a video interview with ZDNET, Daniel dos Santos, senior director of security research at Forescout’s Vedere Lab, stated that generative AI (gen AI) helps make sense of a lot of data in a more natural way than was previously possible without AI and automation.
Machine learning and AI models are trained to help security tools categorize malware variants and detect anomalies, said ESET CTO Juraj Malcho.
Also: AI anxiety afflicts 90% of consumers and businesses
In an interview with ZDNET, Malcho stressed the need for manual moderation, such as purging bad data and feeding cleaner datasets into the models’ continuous training, to further reduce threats.
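As a rough illustration of what this kind of anomaly detection can look like, the sketch below trains a simple model on a small, curated baseline of network telemetry and flags traffic that deviates from it. The feature set, sample values, and threshold are illustrative assumptions rather than any vendor’s actual pipeline, and it assumes scikit-learn and NumPy are available.

```python
# Minimal sketch: training an anomaly detector on curated network telemetry.
# Feature names, values, and the contamination setting are illustrative assumptions.
from sklearn.ensemble import IsolationForest
import numpy as np

# Each row is one connection: [bytes_sent, bytes_received, duration_sec, distinct_ports]
# In practice the baseline would come from a cleaned, reviewed dataset (the "manual
# moderation" Malcho describes), not raw, unfiltered logs.
baseline = np.array([
    [1200,  8400, 2.1, 1],
    [900,   5100, 1.4, 1],
    [1500, 10200, 3.0, 2],
    [1100,  7600, 1.9, 1],
])

model = IsolationForest(contamination=0.05, random_state=42)
model.fit(baseline)

# Score new traffic; a prediction of -1 flags an anomaly worth raising to an analyst.
new_traffic = np.array([[250_000, 300, 45.0, 60]])  # large, exfil-like upload across many ports
print(model.predict(new_traffic))  # e.g. [-1] -> anomalous
```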
Gen AI helps security teams keep up with the onslaught of data generated by the multitude of systems, including firewalls, network monitoring equipment, and identity management systems, that collect data from devices and networks.
All of these, including alerts, become easier to understand and more explainable with gen AI, dos Santos said.
Also: AI is changing cybersecurity and businesses must wake up to the threat
For instance, security tools can not only raise an alert for a potentially malicious attack but also tap natural language processing to explain where a similar pattern may have been identified in previous attacks and what it means when it is detected in your network, he noted.
“It’s easier for humans to interact with that type of narration than before, where it mainly comprises structured data in large volumes,” dos Santos said. Gen AI now summarizes that data into insights that are meaningful and useful to the humans sitting behind the screen, he added.
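To make that concrete, here is a minimal sketch of how a structured alert could be handed to a gen AI model for a plain-language narration. It assumes the openai Python package (v1+), an API key in the environment, and an OpenAI-compatible endpoint; the model name, alert fields, and prompt wording are illustrative, not how Forescout or any specific product does it.

```python
# Minimal sketch of alert "narration": feeding a structured alert to a gen AI model
# and asking for a plain-language summary an analyst can act on.
# Assumes the openai package (>=1.0) and an API key in OPENAI_API_KEY; model name,
# alert fields, and prompt are illustrative assumptions.
import json
from openai import OpenAI

alert = {
    "rule": "Possible C2 beaconing",
    "src_ip": "10.0.4.17",
    "dst_ip": "203.0.113.50",
    "interval_sec": 60,
    "protocol": "HTTPS",
    "similar_past_incidents": ["2023-11 ransomware precursor on finance VLAN"],
}

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a SOC assistant. Explain alerts briefly and concretely."},
        {"role": "user", "content": "Summarise this alert, what it resembles, and a first response step:\n"
                                    + json.dumps(alert, indent=2)},
    ],
)
print(resp.choices[0].message.content)
```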
Malcho added that AI technology enables SOC (security operations center) engineers to prioritize and focus on more important issues.
Also: 1 in 4 people have experienced identity fraud – and most of them blame AI
However, will growing dependence on automation result in humans becoming inexperienced in recognizing anomalies?
Dos Santos acknowledged this as a valid concern but noted that the volume of attacks would only continue to grow, alongside data and devices to protect. “We’re going to need some kind of automation to manage this and the industry already is moving toward that,” he said.
“However, you will always need humans in the loop to make the decisions and determine if they should respond to [an alert].”
Also: The biggest challenge with increased cybersecurity attacks, according to analysts
He added that it would be unrealistic to expect security teams to keep expanding to 50 or 100 people just to keep up. “There’s a limit to how organizations staff their SOCs, so there’s a need to turn to AI and gen AI tools for help,” he said.
He stressed that human instinct and skilled security professionals will always be needed in SOCs to ensure the tools are working as intended.
Furthermore, with cybersecurity attacks and data increasing in volume, there is always room for human professionals to expand their knowledge to better manage this threat landscape, he said.
Also: Businesses’ cloud security fails are ‘concerning’ – as AI threats accelerate
Malcho concurred, adding that it should inspire lower-skilled executives to gain new qualifications so they can add value and make better decisions, rather than blindly consuming signals generated by AI and automation tools.
SOC engineers still have to look at a combination of different signals to connect the dots and see the whole picture, he noted.
“You don’t need to know how the malware works or what variant is generated. What you need is to understand how the bad actors behave,” he said.
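A simplified sketch of that “connect the dots” idea follows: individually weak behavioral signals from different tools are correlated per host so that, together, they justify escalation. The signal names, weights, and threshold are illustrative assumptions, not a real detection rule.

```python
# Minimal sketch: correlating independent signals from different tools into one
# behavioural picture, rather than matching a single malware signature.
# Signal names, weights, and the escalation threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Signal:
    source: str      # e.g. "firewall", "EDR", "identity"
    host: str
    kind: str        # behaviour observed
    weight: int      # how suspicious this behaviour is on its own

signals = [
    Signal("identity", "host-42", "login_outside_business_hours", 1),
    Signal("EDR",      "host-42", "new_scheduled_task_created",   2),
    Signal("firewall", "host-42", "outbound_to_rare_domain",      2),
]

# Group by host and sum weights: individually weak signals become a strong case together.
score: dict[str, int] = {}
for s in signals:
    score[s.host] = score.get(s.host, 0) + s.weight

for host, total in score.items():
    if total >= 4:  # illustrative threshold
        print(f"{host}: correlated behaviour score {total} -> escalate to analyst")
```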
Also: Can synthetic data solve AI’s privacy concerns? This company is betting on it
Increased automation, though, runs the risk of misconfigured code or faulty security patches being deployed and bringing down critical systems, as was the case with the CrowdStrike outage in July.
The global outage occurred after CrowdStrike pushed a buggy “sensor configuration update” to Windows systems running its Falcon Sensor software. While not itself a kernel driver, the update communicates with other components in the Falcon sensor that run in the same space as the Windows kernel, or the most privileged level on a Windows PC, where they interact directly with memory and hardware, according to ESET.
CrowdStrike said a “logic error” in the code caused Windows systems to crash within seconds after they were booted up, displaying the “blue screen of death.” Microsoft had estimated that the update affected 8.5 million Windows devices.
Also: Fidelity breach exposed the personal data of 77,000 customers
Ultimately, this underscores the need for organizations, however large they are, to test their infrastructure and have multiple failsafes in place, said ESET’s global security advisor Jake Moore in a commentary following the CrowdStrike outage. He noted that upgrades and systems maintenance can unintentionally introduce small errors with widespread consequences, as the CrowdStrike incident showed.
Moore highlighted the importance of “diversity” in the use of large-scale IT infrastructures, including operating systems and cybersecurity tools. “Where diversity is low, a single technical incident — not to mention a security issue — can lead to global-scale outages with subsequent knock-on effects,” he said.
Enforcing proper procedures still matters in automation
Simply put, the right automation processes probably were not implemented, Malcho said.
Code, including patches, needs to be reviewed after it is written and tested internally. It should be sandboxed and segmented from the wider network to further ensure it is safe to deploy, he said. Rollout should then be done gradually, he added.
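One way to picture the kind of gate Malcho describes is a small script that only clears a patch for rollout once review, internal tests, and a sandboxed trial have all passed. The check functions here are illustrative stand-ins, not any vendor’s actual tooling.

```python
# Minimal sketch of a pre-deployment gate: a patch only progresses to gradual rollout
# if review, internal tests, and a sandboxed trial all pass.
# reviewed() and sandbox_ok() are illustrative stand-ins for real integrations.
import subprocess

def reviewed(patch_id: str) -> bool:
    # Stand-in: in practice, query the code-review system for an approval.
    return True

def tests_pass() -> bool:
    # Run the internal test suite; a non-zero exit code means failure.
    return subprocess.run(["python", "-m", "pytest", "-q"]).returncode == 0

def sandbox_ok(patch_id: str) -> bool:
    # Stand-in: deploy to an isolated, network-segmented sandbox and watch for crashes.
    return True

def gate(patch_id: str) -> bool:
    checks = [("review", lambda: reviewed(patch_id)),
              ("tests", tests_pass),
              ("sandbox", lambda: sandbox_ok(patch_id))]
    for name, check in checks:
        if not check():
            print(f"Patch {patch_id} blocked at {name} stage")
            return False
    print(f"Patch {patch_id} cleared for gradual rollout")
    return True

gate("patch-2024-10-001")
```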
Dos Santos echoed the need for software vendors to apply the “strictest testing” and ensure issues do not surface. He noted, though, that no system is foolproof and things can slip through the cracks.
Also: AI can now solve reCAPTCHA tests as accurately as you can
The CrowdStrike episode should further highlight the need for organizations deploying updates to do so in a more controlled way, he said. For instance, patches can be rolled out in subsets, and not to all systems at once — even if the security patch is tagged as critical.
“You need processes to ensure updates are done in a testable way. Start small and scale when [testing] is verified,” he added.
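A minimal sketch of that “start small” approach: push the update to progressively larger rings of hosts and halt at the first sign of trouble. Host names, ring sizes, and the health check are illustrative assumptions, not a prescribed rollout policy.

```python
# Minimal sketch of a phased rollout: deploy to small subsets (rings) first and
# stop before touching the wider fleet if anything looks unhealthy.
# deploy() and healthy() are stand-ins for the real update and telemetry mechanisms.
import random

hosts = [f"host-{i:03d}" for i in range(200)]
random.shuffle(hosts)

rings = [hosts[:2], hosts[2:20], hosts[20:60], hosts[60:]]  # ~1% -> 10% -> 30% -> rest

def deploy(host: str) -> None:
    pass  # stand-in for the actual update push

def healthy(host: str) -> bool:
    return True  # stand-in for telemetry: did the host boot, check in, stay responsive?

for ring_no, ring in enumerate(rings, start=1):
    for host in ring:
        deploy(host)
    failures = [h for h in ring if not healthy(h)]
    if failures:
        print(f"Ring {ring_no}: {len(failures)} unhealthy hosts, halting rollout and rolling back")
        break
    print(f"Ring {ring_no}: {len(ring)} hosts healthy, proceeding to next ring")
```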
Pointing to the airline industry as an example, dos Santos said incidents there are investigated seriously so missteps can be identified and avoided in the future. Similar policies should be in place for the cybersecurity industry, where everyone works on the assumption that safety is paramount, he said.
Also: Internet Archive breach compromises 31 million accounts
He called for more responsibility and liability: organizations that release products that are clearly unsafe and do not adhere to the right security standards should be duly punished. Governments will have to figure out how this should be done, he noted.
“There needs to be more liability. We can’t just accept terms of licenses that let these organizations say they aren’t liable for anything,” he said. There also should be user awareness on how to improve their basic security posture, such as changing default passwords on devices, he added.
Done right, AI and automation are necessary tools that will enable cybersecurity teams to manage what would otherwise be an impossible threat environment to handle, Malcho said.
Also: You should protect your Windows PC data with strong encryption – here’s how
And if security teams are not already using these tools, cybercriminals will be one step ahead.
Threat actors already using gen AI
In a report released this month, OpenAI confirmed that threat actors are using ChatGPT in their work. Since the start of 2024, the gen AI developer stopped at least 20 operations worldwide that attempted to use its models. These ranged from debugging malware to generating content for fake social media personas.
“These cases allow us to begin identifying the most common ways in which threat actors use AI to attempt to increase their efficiency or productivity,” OpenAI said. These malicious hackers often used OpenAI models to perform tasks in a “specific, intermediate phase of activity” after acquiring basic tools, such as internet access and social media accounts, but before deploying “finished” products, such as social media posts or malware via various channels.
For example, a threat actor dubbed “STORM-0817” used ChatGPT models to debug their code, while a covert operation OpenAI coined “A2Z” used its models to generate biographies for social media accounts.
Also: ChatGPT’s most lauded capability also brings big risk to businesses
OpenAI added that it disrupted a covert Iranian operation in late August that generated social media comments and long-form articles about the US election as well as the conflict in Gaza, and Western policies toward Israel.
Companies are noticing the use of AI in cyberattacks, according to a global study released this month by Keeper Security, which polled more than 800 IT and security executives.
Some 84% said AI-enabled tools have made phishing and smishing attacks more difficult to detect, prompting 81% to implement employee policies around the use of AI.
Another 51% deem AI-powered attacks the most serious threat facing their organization, with 35% admitting they are least prepared to combat such threats, compared to other types of cyber attacks.
Also: Businesses still ready to invest in Gen AI, with risk management a top priority
In response, 51% said they have incorporated data encryption into their security strategies, while 45% are looking to improve their training programs to guide employees, for instance, in identifying and responding to AI-powered threats. Another 41% are investing in advanced threat detection systems.
Findings from a September 2024 report from Sophos revealed concerns about AI-enabled security threats, with 73% pointing to AI-augmented cybersecurity attacks as the online threat they worry most about. This figure was highest in India, where almost 90% named AI-powered attacks as their top concern, followed by 85% in the Philippines and 78% in Singapore, according to the study, which based its research on 900 companies across six Asia-Pacific markets, including Australia, Japan, and Malaysia.
While 45% believe they have the necessary skills to deal with AI threats, 50% plan to invest more in third-party managed security services. Among those planning to increase their spending on such managed services, 20% said their investments will grow by more than 10%, while the rest point to an increase of between 1% and 10%.
Also: OpenAI sees new Singapore office supporting its fast growth in the region
Some 22% believe they have a comprehensive AI and automation strategy in place, with 72% noting they have an employee tasked with leading their AI strategy and efforts.
To plug shortages in AI skills, 45% said they will outsource to partners, while 49% plan to train and develop in-house skills and will need partners to support training and education.
On average, 20% currently use a single vendor for their cybersecurity needs, while 29% use two and 23% use three. Some 10% use tools from at least five security vendors.
Also: Transparency is sorely lacking amid growing AI interest
Underperforming tools, along with a security breach or major outage involving a third-party service provider, are the top reasons organizations will consider changing their cybersecurity vendor or strategy.
In addition, 59% will “definitely” or “probably” not appoint a third-party vendor that suffered a security incident or breach. Some 81% will still consider vendors that have been breached if there are additional clauses related to performance and specific service-level agreements.