Cybersecurity Chiefs Navigate AI Risks and Potential Rewards
Security chiefs say the benefits of artificial intelligence are clear, but the promises and risks of early generative AI are overblown.
Generative AI platforms such as OpenAI’s ChatGPT have gained attention for their ability to answer conversational questions, write essays and perform other tasks in humanlike ways.
Security vendors are touting the benefits of the technology, saying it can augment human analysts by analyzing and distilling data from wildly different sources into a digestible report. Google released a security-focused generative AI product in April, joining cyber technology providers including SecurityScorecard and
Some chief information security officers see the technology’s potential but are unconvinced that in its current form it does anything new. Machine-learning technology has been in place for years in areas such as market surveillance units of stock exchanges, performing similar data analysis functions, and in cybersecurity teams at large companies such as retailer Walmart.
They also don’t trust it.
“At present, we’re basically looking at every result and trying to understand if we can trust not just the work that went into the result, in terms of the sources that it was trained from, but then the result itself,” said Justin Shattuck, CISO at insurer Resilience. Generative AI systems have been known to give inaccurate or misleading results, sometimes from prompts that are too vague but also from poor data sources. The limitations of the technology mean that it can run into trouble on relatively simple queries, such as solutions to mathematical problems expressed conversationally.
Shattuck said his team has experimented with generative AI to analyze the security information generated by its systems. AI can identify data points of interest that may be missed by human analysts reading reams of alerts.
“We found that we can trust it for that type of workload,” he said.
Government officials say they are still assessing the impact that AI variants such as generative apps could have before issuing recommendations. John Katko, a former congressman for New York's 24th district who served as ranking member of the House Homeland Security Committee until earlier this year, said that the true potential of the technology has yet to be realized, given the speed of development.
“Where is AI going to be in six months, and how is that going to change things? Look at how much it has changed in the last three months,” he said, referring to its widespread adoption by software providers.
For Lucia Milică Stacy, global resident CISO at cybersecurity firm Proofpoint, the speed of development and public fascination with the technology have led to a rash of generative AI deployments by technology providers. Sometimes this stems from a commercial imperative, but it also reflects worries that if they don't use the technology, hackers will, she said.
“Our job as security leaders is to manage that risk, and every time there’s new tech, there’s a new opportunity for that threat actor to leverage that to get into my environment,” said Milică Stacy.
There is little doubt that generative AI is a boon to phishing attackers, who can otherwise be tripped up by poorly worded scam emails. ChatGPT can write grammatically correct copy for them. Cybersecurity company
said in an April report that it had observed a 135% rise in spam emails to clients between January and February with markedly improved English-language grammar and syntax. The company suggested that this change was due to hackers using generative AI applications to craft their campaigns.
Companies including electronics giants Samsung Electronics and
lender
and telecommunications company
have barred or restricted employee use of ChatGPT and similar programs. The measures were introduced over fears that employees might paste sensitive information into these tools, which could then leak or send trade secrets back to the AI model to be trained on.
Concerns should be manageable through existing data-protection procedures and a few new controls, said Supro Ghose, CISO at
a regional bank operating in Virginia, Washington, D.C., and Maryland. The new crop of AI tools doesn't necessarily bring risks that good employee training and data classification can't counter, he said.
Cybersecurity teams should scan company networks to find where employees are using ChatGPT and other free AI utilities, Ghose said. “Awareness is the first thing you should have,” he said. Network detection and response providers, including ExtraHop Networks, are adding such visibility features to their tools.
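The kind of visibility Ghose describes can be approximated even without a dedicated product, for example by scanning DNS or proxy logs for resolutions of known generative-AI endpoints. The sketch below is illustrative only: the domain list and log format are assumptions, not how ExtraHop or any vendor actually implements this.

```python
# Sketch: flag internal hosts that have looked up known generative-AI
# domains, given parsed DNS/proxy log entries. The domain list is an
# illustrative assumption and would need maintenance in practice.
AI_TOOL_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "bard.google.com",
}

def hosts_using_ai_tools(log_entries):
    """log_entries: iterable of (client_host, queried_domain) tuples.

    Returns a dict mapping each flagged client to the set of
    AI-tool domains it queried.
    """
    flagged = {}
    for client, domain in log_entries:
        if domain in AI_TOOL_DOMAINS or any(
            domain.endswith("." + d) for d in AI_TOOL_DOMAINS
        ):
            flagged.setdefault(client, set()).add(domain)
    return flagged

log = [
    ("laptop-17", "chat.openai.com"),
    ("laptop-17", "intranet.example.com"),
    ("desktop-02", "api.openai.com"),
]
print(hosts_using_ai_tools(log))
```

A real deployment would pull these entries from DNS resolver or web-proxy logs; the point is simply that "awareness," as Ghose puts it, can start with data most security teams already collect.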
Companies will have to both train employees not to use sensitive or proprietary information with generative AI tools and install digital filters and controls to stop the tools from taking in such data, he said. “The reality is the risk is very, very similar to sending an email out,” he said.
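A minimal version of the "digital filters" Ghose mentions is a pattern check on outbound prompts before they reach an external AI tool. The patterns below (a U.S. Social Security number format, a "confidential" document marking, and a hypothetical internal email domain) are illustrative assumptions, not a complete data-loss-prevention policy.

```python
import re

# Sketch of an outbound content filter: block prompts that appear to
# contain sensitive data before they are sent to an external AI tool.
# Patterns are illustrative assumptions, not a full DLP ruleset.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # US SSN format
    re.compile(r"\bconfidential\b", re.IGNORECASE),  # document marking
    re.compile(r"\b[a-z0-9._%+-]+@internal\.example\.com\b", re.IGNORECASE),
]

def is_blocked(prompt: str) -> bool:
    """Return True if the prompt matches any sensitive-data pattern."""
    return any(p.search(prompt) for p in SENSITIVE_PATTERNS)

print(is_blocked("Summarize this CONFIDENTIAL roadmap"))  # True
print(is_blocked("Write a haiku about autumn"))           # False
```

This mirrors Ghose's email analogy: the check is the same kind of gate an email DLP filter applies, just placed in front of an AI tool instead of an outbound mail server.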
—Kim S. Nash contributed to this article.
Write to James Rundle at james.rundle@wsj.com
Copyright ©2022 Dow Jones & Company, Inc. All Rights Reserved.