- "기밀 VM의 빈틈을 메운다" 마이크로소프트의 오픈소스 파라바이저 '오픈HCL'란?
- The best early Black Friday AirPods deals: Shop early deals
- The 19 best Black Friday headphone deals 2024: Early sales live now
- I tested the iPad Mini 7 for a week, and its the ultraportable tablet to beat at $100 off
- The best Black Friday deals 2024: Early sales live now
IT Leaders Are Fifty-Fifty on Using GenAI For Cybersecurity
There remains a lack of consensus among European IT leaders about the value of generative AI (GenAI) in a cybersecurity context, according to a new study from Corelight.
The network detection and response (NDR) specialist polled 300 IT decision makers (ITDMs) in the UK, France and Germany to produce the report, Generative AI in Security: Empowering or Divisive?
It found that the technology is a source of optimism and concern, in almost equal measure.
Some 46% of respondents claimed that they’re proactively looking at how to incorporate the technology into their cybersecurity approaches. However, a similar share (44%) said that concerns over data exposure and enterprise silos make it difficult or impossible to use GenAI in cybersecurity.
A further 37% argued that GenAI is “not safe to use in cybersecurity.”
Read more on GenAI security risks: Fifth of CISOs Admit Staff Leaked Data Via GenAI
These security concerns have already played out in practice: Samsung, for example, banned the use of ChatGPT in 2023 after staff shared private meeting notes and source code with the tool. Any sensitive customer data or IP shared with such tools could theoretically be accessed by other users – presenting clear security, privacy and compliance risks.
However, these challenges are not insurmountable, as long as the GenAI tool and underlying large language model (LLM) are correctly architected with privacy in mind, Corelight claimed.
Its own commercial NDR offering, the company said, establishes a “functional firewall” between customer data and the GenAI feature.
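The article does not describe how that separation works in practice, but one common pattern is to strip or tokenize sensitive identifiers before any text leaves the customer environment. The Python sketch below is purely illustrative, with made-up field names and regexes; it is not Corelight’s actual architecture:

```python
# Illustrative sketch of a "functional firewall": sensitive values are replaced
# with opaque tokens before any text is sent to the model, and the mapping to
# the real values never leaves the customer environment.
# Hypothetical example only; not Corelight's actual implementation.
import re

IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
USER_RE = re.compile(r"\buser=(\w+)")

def sanitize(text: str) -> tuple[str, dict[str, str]]:
    """Replace IP addresses and usernames with placeholder tokens."""
    mapping: dict[str, str] = {}

    def token(prefix: str, value: str) -> str:
        key = f"<{prefix}_{len(mapping)}>"
        mapping[key] = value
        return key

    text = IP_RE.sub(lambda m: token("IP", m.group(0)), text)
    text = USER_RE.sub(lambda m: "user=" + token("USER", m.group(1)), text)
    return text, mapping

def restore(text: str, mapping: dict[str, str]) -> str:
    """Map placeholder tokens in a model response back to the real values."""
    for key, value in mapping.items():
        text = text.replace(key, value)
    return text

alert = "Suspicious SMB traffic from 10.0.4.17 to 10.0.9.3, user=jsmith"
prompt, mapping = sanitize(alert)
print(prompt)  # Suspicious SMB traffic from <IP_0> to <IP_1>, user=<USER_2>
# Only `prompt` would be sent to the GenAI feature; restore() re-attaches the
# real identifiers to the response on the analyst's side.
```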
“GenAI’s adoption is hindered by concerns over data confidentiality and model accuracy. As models improve in overall reasoning capacity and cybersecurity knowledge, and as more LLM deployments include structural privacy protections, GenAI is set to become integral to security operations,” argued Ignacio Arnaldo, director of data science at Corelight.
Half (50%) of the ITDMs polled by the vendor claimed GenAI’s biggest impact on cybersecurity could be to provide alert context and analysis for SecOps teams; a brief illustrative sketch of that use case follows the list below. Other use cases they cited include:
- Maintaining compliance policies (41%)
- Recommending best practices for domain-specific languages such as identity and access management (IAM) policies (36%)
- Analyzing unstructured vulnerability information (35%)
- Remediation guidance (35%)
- Analyzing unstructured network connection and process information (32%)
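As a rough illustration of the top-ranked use case, the sketch below shows how a structured alert might be turned into a prompt asking a model for triage context. The alert fields and the query_llm() call mentioned in the comments are hypothetical placeholders, not any real product API:

```python
# Illustrative sketch of the top-ranked use case: asking an LLM to add context
# to a raw alert so a SecOps analyst can triage it faster.
# The alert fields and query_llm() are hypothetical, not a real product API.
import json

def build_triage_prompt(alert: dict) -> str:
    """Turn a structured alert into a prompt asking for context and next steps."""
    return (
        "You are assisting a SOC analyst. Given the alert below, explain what "
        "likely happened, how severe it is, and what to investigate next.\n\n"
        f"Alert:\n{json.dumps(alert, indent=2)}"
    )

alert = {
    "signature": "ET POLICY SMB Executable File Transfer",
    "src_ip": "<IP_0>",   # identifiers already tokenized, as in the earlier sketch
    "dst_ip": "<IP_1>",
    "protocol": "smb",
    "bytes_transferred": 4718592,
}

prompt = build_triage_prompt(alert)
print(prompt)
# A call such as query_llm(prompt) would go here; the response is a plain-language
# summary the analyst reviews alongside the original telemetry, not an automated action.
```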