Navigating AI and Cybersecurity: Insights from the World Economic Forum (WEF)


Cybersecurity has always been a complex field. Its adversarial nature means the margins between failure and success are much finer than in other sectors. As technology evolves, those margins get even finer, with attackers and defenders scrambling to exploit them and gain a competitive edge. This is especially true for AI.

In February, the World Economic Forum (WEF) published an article entitled “AI and cybersecurity: How to navigate the risks and opportunities,” highlighting AI’s existing and potential impacts on cybersecurity. The bottom line? AI benefits both the good and bad guys, so it’s essential the good guys do everything they can to embrace it.

This article will examine and expand on some of the WEF’s key points.

Advantages and opportunities for attackers

Before diving into how AI can enhance cybersecurity, it’s worth exploring some of the opportunities it grants cybercriminals. After all, it’s difficult to combat threats we don’t truly understand.

Of all the issues put forward, deepfakes are perhaps the most concerning. As the WEF notes, more than 4 billion people are eligible to go to the ballot box this year, and deepfakes will undoubtedly play a role. In the UK alone, both the Prime Minister and the Leader of the Opposition have been the subject of AI-generated fake content. One might be tempted to assume that modern voters can identify a digitally manipulated video, but we need only look to the WEF’s example of a deepfake fooling a Hong Kong finance worker to the tune of $25 million to realise this isn’t necessarily the case.

Sticking with the social engineering theme, AI has made phishing scams easier to create and harder to detect. Before the launch of ChatGPT in November 2022, it felt as if we were gaining ground on phishing; the scams weren’t going away, but awareness was improving by the day, and people increasingly knew how to identify them. Spelling mistakes, poor grammar, and clunky English were all tell-tale signs of a scam. Today, however, scammers with large language models (LLMs) at their fingertips can craft and distribute phishing scams at scale, without any of the mistakes that would previously have given them away.

Advantages and opportunities for defenders

But it’s not all doom and gloom; AI also has enormous benefits for cybersecurity professionals. The WEF gives a broad overview of how the cybersecurity sector can take advantage of AI, but it’s worth looking a little deeper at some of those use cases.

AI frees up time for security teams. By automating mundane, repetitive tasks, it allows analysts to spend more time and energy innovating, improving their enterprise environments, and defending against more advanced threats.

AI is also an invaluable resource for speeding up detection and response times. AI tools continuously monitor network traffic, user behaviour, and system logs, flagging anomalies to security teams as they arise. This means security teams can proactively head off attacks instead of merely reacting to incidents after they have taken place.
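To make that idea concrete, here is a minimal sketch of the kind of anomaly detection such tools perform, written in Python using scikit-learn’s Isolation Forest (one common unsupervised approach). The session features, thresholds, and numbers below are illustrative assumptions for the sketch, not the WEF’s recommendation or any specific product’s method:

    # Minimal anomaly-detection sketch, assuming scikit-learn is available.
    # The session features (KB transferred, login hour, failed logins) are
    # illustrative assumptions, not drawn from the WEF article.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(7)

    # Simulate baseline activity: one row per user session.
    normal_sessions = np.column_stack([
        rng.normal(500, 100, 1000),   # KB transferred: typical sizes
        rng.normal(13, 2, 1000),      # login hour: clusters around midday
        rng.poisson(0.2, 1000),       # failed logins: rare
    ])

    # Two suspicious sessions: large 3 a.m. transfers after repeated failures.
    suspicious_sessions = np.array([
        [50_000, 3, 8],
        [75_000, 2, 12],
    ])

    # Learn what "normal" looks like, then score new sessions against it.
    detector = IsolationForest(contamination=0.01, random_state=7)
    detector.fit(normal_sessions)

    # predict() returns 1 for inliers and -1 for anomalies worth flagging.
    for session, label in zip(suspicious_sessions,
                              detector.predict(suspicious_sessions)):
        verdict = "FLAG for review" if label == -1 else "looks normal"
        print(f"kb={session[0]:.0f} hour={session[1]:.0f} "
              f"fails={session[2]:.0f} -> {verdict}")

Real deployments learn from far richer telemetry and feed flagged events into a triage workflow, but the principle is the same: model normal behaviour, then surface the sessions that don’t fit it.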

According to the ISC2 Cybersecurity Workforce Study, the cybersecurity sector is currently short 4 million workers. This is an alarming figure, but one that AI can help address. The WEF argues that AI can be used to educate people about cybersecurity and to train the next generation of professionals, both valid points, but this overlooks the fact that AI could also reduce the need for cybersecurity workers by automating much of the work they do.

AI regulation and collaboration

While AI regulation is undoubtedly important for ensuring, as the WEF puts it, the “development, use and implementation of AI technologies in a way that will benefit societies while limiting the harm they may cause,” it is perhaps more important that government, industry, academia, and civil society are singing from the same hymn sheet. Contradictory motivations and priorities could prove disastrous.

As such, the WEF’s AI Governance Alliance, launched in April 2023, brings those groups together for a common goal: championing the responsible global design and release of transparent and inclusive AI systems. In a world where competition reigns supreme, initiatives like this are vital for keeping safety front of mind as AI systems are developed.

Some recent examples of AI regulation include:

  • The EU AI Act
  • The UN advisory body on AI governance
  • The UK AI whitepaper
  • The US Executive Order on AI Safety

But, while well-intentioned, many of these have faced backlash. Most notably, the EU AI Act, which the European Parliament adopted in March, has drawn significant criticism from industry for stifling innovation. This drives home the critical takeaway from the WEF article: collaboration is vital if we want to develop AI safely. As the WEF has attempted to do with the AI Governance Alliance, it’s important that all groups with a vested interest in AI – especially cybersecurity professionals – are involved in the regulatory process. It’s uncharted territory, and we’ll all be safer together.


Editor’s Note: The opinions expressed in this guest author article are solely those of the contributor and do not necessarily reflect those of Tripwire.


