Emerging A.I. Threats Require New Types of Cybersecurity Skills
Since the release of OpenAI’s ChatGPT large language model chatbot in November 2022, the corporate world has focused on the ability of generative artificial intelligence (A.I.) tools to automate and streamline various business functions. A report released by Goldman Sachs earlier this year finds that two-thirds of U.S. and European jobs are exposed to some degree of A.I. automation. What does the emergence of A.I. mean for cybersecurity?
Along with the business and cost-savings potential associated with generative A.I. and large language models (LLM), there are concerns about emerging threats that the technology can expose organizations to, especially as cybercriminals and malicious actors adopt these chatbots and tools.
Two recent reports show how organizations and their senior leadership are growing more concerned about these generative A.I. tools and how threat actors might use them. This, in turn, is shaping how companies spend their money on cybersecurity protections, as well as what types of skills are needed by tech pros who must address these issues.
In September, security firm Proofpoint released its Cybersecurity: The 2023 Board Perspective report, which surveyed more than 650 board members at organizations with 5,000 or more employees in the U.S., Canada, Europe and other countries and regions. That report found that nearly six in 10 board members (59 percent) see ChatGPT and similar generative A.I. tools as a security risk to their business.
The same survey also noted that 37 percent of board directors report that their organization’s cybersecurity would benefit from a bigger budget.
At about the same time, cybersecurity firm Darktrace issued a report finding that, between May and July, phishing emails that mimic senior executives decreased by 11 percent, but that account takeover attempts jumped 52 percent and impersonation of the internal IT team increased by 19 percent. One reason for this pattern is that cybercriminals have adopted generative A.I. tools to refine their own techniques.
“The changes suggest that as employees have become better attuned to the impersonation of senior executives, attackers are pivoting to impersonating IT teams to launch their attacks,” according to the Darktrace blog post. “While it’s common for attackers to pivot and adjust their techniques as efficacy declines, generative AI—particularly deepfakes—has the potential to disrupt this pattern in favor of attackers.”
These reports point to a shift in how organizations are responding to generative A.I. On one hand, these are useful tools to help automate business functions. On the other, the technology is ripe for manipulation. It also means that security and tech pros need to understand how the threat landscape is changing and that new approaches to these problems are needed, said Nicole Carignan, vice president of strategic cyber A.I. at Darktrace.
“The volume and sophistication of threats has grown exponentially in recent years and is poised to increase,” Carignan told Dice. “This makes it difficult for human security teams to monitor, detect and react to every threat or attempted attack. Modern systems are also complex and thousands of micro-decisions must be made daily to match an attacker’s spontaneous and erratic behavior to spot, prioritize and contain threats.”
A.I. Is Poised to Change Cybersecurity
While organizations are expected to spend additional budget dollars on generative A.I. tools and services, the technology should also make leaders rethink how they spend their security dollars to address threats as adversaries adopt A.I., noted Brian Reed, senior director for cybersecurity strategy at Proofpoint.
“The board perspective report does show that 84 percent of boards envision spending more on security over the next 12 months,” Reed told Dice. “Ideally, these organizations would look beyond the initial market hype of generative A.I. and security chatbots and instead invest in platforms and analytics that can help their organization provide a real-time view of what is actually going on within their environment from a cybersecurity perspective.”
For some organizations, countering generative A.I. threats means investing in additional A.I. tools that automate processes and help overcome the well-documented skills shortage within cybersecurity.
“A.I. can be applied to incident readiness and response, enabling teams to safely run live simulations of real-world cyberattacks—be it data theft or ransomware—on their own assets and within their own environments,” Carignan said. “By providing teams with the ability to practice and train on real-world attacks, A.I. can help significantly strengthen the readiness of existing cyber workers with incident simulations, making them more effective and efficient during a response. This is just one example of how A.I. is also helping to address the cyber skills shortage.”
Industry experts such as Shawn Surber, senior director of technical account management at Tanium, are more skeptical of how well generative A.I. can code or create phishing emails compared to actual humans. The technology, however, is lowering the barrier to entry for attackers, and he argues that now is the time to invest in tools and processes that help organizations fix their existing vulnerabilities.
“If an organization can see and manage everything that’s on its network or connecting into it, then they’ll gain far more defensive advantage from making sure those devices are patched and properly configured than they will from snatching up shiny new tools to combat over-hyped new threats that haven’t even proven themselves yet,” Surber told Dice.
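Surber's argument is about basic visibility and hygiene rather than any particular product. A minimal sketch of that kind of cross-check is below; the inventory format, field names, agent versions and baseline are all invented for illustration and are not drawn from any real tool:

```python
# Hypothetical example: flag devices in an asset inventory that fall behind
# a minimum patch baseline or violate a basic configuration rule.
# The inventory structure and baseline values are invented for illustration.

MIN_AGENT_VERSION = (7, 2, 0)   # assumed minimum endpoint-agent version

inventory = [
    {"host": "hr-laptop-014", "agent_version": "7.1.3", "disk_encrypted": True},
    {"host": "build-server-02", "agent_version": "7.4.0", "disk_encrypted": False},
    {"host": "guest-kiosk-01", "agent_version": None, "disk_encrypted": False},
]

def parse_version(value):
    """Turn '7.1.3' into (7, 1, 3); None means no agent reported at all."""
    if not value:
        return None
    return tuple(int(part) for part in value.split("."))

def audit(devices):
    """Return (host, issue) pairs for follow-up by the security team."""
    findings = []
    for device in devices:
        version = parse_version(device["agent_version"])
        if version is None:
            findings.append((device["host"], "no endpoint agent reporting"))
        elif version < MIN_AGENT_VERSION:
            findings.append((device["host"], f"agent below baseline ({device['agent_version']})"))
        if not device["disk_encrypted"]:
            findings.append((device["host"], "disk encryption disabled"))
    return findings

if __name__ == "__main__":
    for host, issue in audit(inventory):
        print(f"{host}: {issue}")
```

The point is the workflow, not the script: knowing what is connected and whether it meets a baseline closes the gaps attackers, A.I.-assisted or not, actually exploit.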
Revamping Skills to Address A.I.
As organizations adjust to a world where generative A.I. and other automation tools become more accepted and commonplace, tech pros will need to adjust their own skill sets to address these concerns, as well as learn how to use the technology to augment their jobs.
Tanium’s Surber sees a need for more security awareness training across entire organizations so employees can recognize the threats that stem from cybercriminals using generative A.I. At the same time, the amount of data that A.I. can collect and sort will push tech and security pros as well as developers to think about new approaches.
“Just like when internet search engines revolutionized the practice of quickly and easily gaining access to information, A.I. will provide the ability to quickly distill and summarize large data sets,” Surber added. “Employees will need to develop skills in interpreting the data returned and quickly discerning errors or misinformation. An A.I. isn’t yet capable of discerning fact from fiction so human critical thinking and discretion will be critical in avoiding the insertion of erroneous or malicious data into the business.”
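Part of the error-spotting Surber describes can itself be automated. As a minimal, purely illustrative sketch (the metrics, the summary text and the notion of a "verified" figure are all invented), a short script can check that numbers cited in an A.I.-generated summary actually appear in the source data and route anything else to a human reviewer:

```python
import re

# Hypothetical sketch: before an A.I.-generated summary of security metrics is
# circulated, verify that every figure it cites appears in the source data and
# flag anything that does not for human review. All values are invented.

source_metrics = {"phishing_reports": 142, "blocked_logins": 3804, "open_incidents": 7}

ai_summary = (
    "This week the SOC received 142 phishing reports, blocked 3804 suspicious "
    "logins, and closed all but 9 open incidents."
)

def unverified_figures(summary: str, metrics: dict) -> list[str]:
    """Return numbers cited in the summary that do not appear in the source data."""
    known = {str(value) for value in metrics.values()}
    cited = re.findall(r"\d+", summary)
    return [figure for figure in cited if figure not in known]

suspect = unverified_figures(ai_summary, source_metrics)
if suspect:
    print("Needs human review, unverified figures:", suspect)  # e.g. ['9']
else:
    print("All cited figures match the source data.")
```

A check like this does not replace the critical thinking Surber calls for, but it narrows a reviewer's attention to the claims that cannot be traced back to the underlying data.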
Tech professionals will also need to explain not only how A.I. works, but also what the tools are telling the organization about the threats and what needs to be done to mitigate risks. This is where soft skills come into play, said Pyry Åvist, co-founder and CTO at Hoxhunt, which is based in Helsinki.
“Ironically, so-called soft skills like communications, change management and training are becoming more important than ever now in the age of A.I.-generated phishing attacks. We know for certain that a good security behavior change program will protect people and their companies from AI-enabled phishing attacks,” Åvist told Dice.
For organizations, this means dropping older security awareness training models and adopting a behavior- and culture-change program that focuses on speedy threat detection and reporting.
“This equips employees with the skills and tools to report even the sophisticated zero-day phishing attacks that slip past email filters,” Åvist added. “People are the eyes and ears of the security stack, and will always be your best bet at catching the worst phishing attacks.”
Zane Bond, head of product at Keeper Security, also believes that tech pros will need the ability to explain how generative A.I. works to the larger organization. One limitation of neural networks and LLMs in security is that the system will come up with a believable and probably accurate assessment, but it will be unable to explain how it came to that conclusion.
“This can be a great thread to pull on and investigate, but it’s risky to make important business-impacting decisions based on this information alone,” Bond told Dice. “The implementation of A.I.-powered cybersecurity tools requires a comprehensive strategy that includes other technologies to boost limitations as well as human expertise to provide a layered defense against evolving threats.”
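Bond's caution lends itself to a simple pattern: treat the model's verdict as one input rather than the decision itself. The sketch below is purely illustrative (the alert fields, thresholds and actions are invented), but it shows how an unexplained A.I. conclusion can be corroborated by independent signals or escalated to a human analyst instead of acted on alone:

```python
from dataclasses import dataclass

# Hypothetical sketch of a "layered defense": an A.I. verdict is one signal,
# corroborated by independent rule-based checks, and anything the layers
# disagree on goes to a human analyst rather than being acted on automatically.
# Field names, thresholds and actions are invented for illustration.

@dataclass
class Alert:
    host: str
    ai_verdict: str            # "malicious", "suspicious" or "benign" from an A.I. model
    ai_confidence: float       # model confidence, 0.0-1.0
    known_bad_indicator: bool  # independent threat-intel / signature match
    critical_asset: bool       # from the asset inventory

def triage(alert: Alert) -> str:
    """Decide whether to auto-contain, escalate to a human, or close an alert."""
    if alert.ai_verdict == "malicious" and alert.known_bad_indicator:
        return "auto-contain"            # two independent layers agree
    if alert.ai_verdict == "malicious" or alert.critical_asset:
        return "escalate to analyst"     # a single unexplained signal: a human decides
    if alert.ai_confidence < 0.5:
        return "escalate to analyst"     # the model itself is unsure
    return "close"

print(triage(Alert("finance-db-01", "malicious", 0.92,
                   known_bad_indicator=False, critical_asset=True)))
# -> "escalate to analyst": a believable A.I. verdict alone does not trigger action
```

The design choice mirrors Bond's advice: the A.I. output is a lead worth investigating, but business-impacting actions wait for corroboration from other technologies or human expertise.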