The Interplay of AI and Cybersecurity: Survey Results
Artificial intelligence (AI) has a long and storied history. Ancient Greeks, for example, told stories of Talos, an enormous automaton that stood guard over Crete’s shores. In the 17th century, Gottfried Leibniz, Thomas Hobbes, and René Descartes explored the possibility that all rational thought could be as systematic as algebra or geometry. By the mid-20th century, British computer scientist Alan Turing was seriously investigating the possibility of “machine intelligence,” and by 1956, “artificial intelligence research” was an established academic discipline.
Fast forward to today, and artificial intelligence is ubiquitous. The launch of ChatGPT in late 2022 marked an enormous leap forward for AI, prompting organizations and individuals worldwide to implement the technology into their lives and work. It also sparked discussion around ethics, forcing the world to consider bias, privacy, and responsible development as they relate to artificial intelligence.
The dawn of the AI era—as we can safely call our post-ChatGPT world—also raised questions for the cybersecurity industry. For example, will AI be a force for good or evil? How can AI improve cybersecurity? And how will AI transform the current threat landscape? These are all questions the Cloud Security Alliance (CSA) has set out to answer.
In November 2023, the CSA distributed an online survey to nearly 2,500 cybersecurity experts to gain a deeper understanding of:
- Current security challenges
- Perceptions of AI in cybersecurity
- Industry familiarity with AI
- Plans for AI use in the industry
- AI impact on staffing and training
Let’s examine some of the key findings from the CSA’s State of AI and Security Survey Report.
Will AI Benefit Attackers or Defenders?
The CSA study reveals that most (63%) surveyed security professionals are cautiously optimistic about AI, believing it will enhance threat detection and response. However, those same respondents are split on whom AI will benefit more: 34% believe security teams will benefit most, 25% believe AI will favor malicious actors, and 31% think the technology offers equal advantages to defenders and attackers.
Are Security Pros Concerned AI Will Replace Them?
Despite concerns that AI could make many jobs obsolete, most cybersecurity professionals believe the technology will empower them, not replace them. Most respondents believe AI will enhance their skills (30%), support their roles (28%), or automate large parts of their tasks and free up time for more advanced work. Only a minimal number (12%) of those surveyed fear that AI will replace them entirely. However, more than half of the surveyed security professionals are concerned about possible overreliance on AI, stressing the importance of balancing AI-driven and human-driven security approaches.
How Do Executive and Staff AI Perspectives Differ?
Somewhat predictably, C-suite executives are (or at least claim to be) much more familiar with AI than their staff – 51% of C-suite respondents claim to be very familiar with AI, compared to only 11% of staff. The same proportion of C-levels reported having a “clear” understanding of AI, compared to only 14% of staff. Unfortunately, we can only speculate whether these results truly represent a knowledge gap or are merely a particularly concerning example of C-level hubris.
What we do know, however, is that most staff (74%) are confident in their leadership’s knowledge of AI’s security implications – though given that only 14% of staff claim a clear understanding of AI themselves, that confidence stands on shaky foundations. Similarly, 84% of respondents said their executive leadership and boards advocate for AI adoption. It’s clear, however, that executives and boards need to do more to educate their staff on AI if that adoption is to succeed.
Will Organizations Implement AI in 2024?
According to the CSA’s report, at the time of distribution, over half (55%) of organizations planned to implement generative AI in the next year. They planned to explore the following use cases:
- Rule creation – 21%
- Attack simulation – 19%
- Compliance violation monitoring – 19%
- Network detection – 16%
- Reduce false positives – 16%
- Training development and support – 15%
- Anomaly classification – 14%
- Natural language to search – 13%
- Threat summarization – 13%
- Data loss prevention, IP protection – 13%
- User behavior analysis – 11%
- Automated report generation – 10%
- Endpoint detection – 10%
- Event log summarization – 9%
- Forensic analysis – 9%
- Chatbot – 9%
- Incident summarization – 8%
- Configuration drift – 8%
- Recommendations for action/remediation – 8%
- Code analysis – 7%
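To make one of the listed use cases concrete, here is a minimal sketch of the "anomaly classification" idea: flagging event counts that deviate sharply from their baseline. This is a classical statistical baseline rather than generative AI – the survey's respondents would be layering AI on top of signals like these – and the data, function name, and threshold are illustrative assumptions, not anything from the CSA report.

```python
import statistics

def flag_anomalies(counts, threshold=2.5):
    """Return indices of values more than `threshold` standard deviations
    from the mean -- a simple statistical baseline for anomaly detection.

    The threshold of 2.5 is an illustrative choice; real deployments tune
    this against their own false-positive tolerance.
    """
    mean = statistics.mean(counts)
    stdev = statistics.stdev(counts)
    if stdev == 0:
        return []  # constant series: nothing deviates
    return [i for i, c in enumerate(counts) if abs(c - mean) / stdev > threshold]

# Hypothetical hourly login-failure counts; the final hour spikes sharply.
logins = [12, 14, 11, 13, 12, 15, 13, 12, 14, 98]
print(flag_anomalies(logins))  # → [9]
```

An AI-assisted workflow might feed such flagged events to a model for classification and summarization, which is where the survey's "reduce false positives" and "incident summarization" use cases come in.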
What are the Main Challenges to AI Implementation?
According to the survey, security professionals believe that a shortage of skilled staff (33%) is the main challenge to implementing AI in cybersecurity. This includes finding new staff with the right skills and upskilling existing staff.
Other challenges include resource allocation (11%), understanding AI risks (10%), and the cost of implementation. Somewhat surprisingly, respondents did not rank traditional barriers such as regulatory and data privacy compliance concerns among the foremost challenges.
All in all, cybersecurity professionals are cautiously optimistic about AI. However, their understanding of the technology leaves something to be desired; this, perhaps, reflects an overconfidence and under-preparedness that could prove disastrous. C-level executives must ensure that their understanding of AI is where they believe it to be and work to improve their staff’s knowledge. Although organizations understandably want to implement AI into their business processes, they must address knowledge and skill gaps before they do so.
Editor’s Note: The opinions expressed in this guest author article are solely those of the contributor and do not necessarily reflect those of Tripwire.