Proofpoint Previews Generative AI Tools to Thwart Social Engineering


At its Protect 2023 conference, Proofpoint today revealed it is leveraging BERT, a large language model (LLM) originally created by Google, to thwart social engineering attacks that use generative artificial intelligence (AI).

The LLM is being added to the company’s email analysis and response platform, dubbed CLEAR (Closed-Loop Email Analysis and Response).
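Proofpoint has not published implementation details, but a BERT-style model applied to this problem typically acts as a text classifier over message content. The minimal sketch below, using the open source Hugging Face transformers library, illustrates the general pattern; the model checkpoint and the email text are placeholder assumptions, not anything from Proofpoint.

```python
# A minimal sketch of BERT-based social engineering detection as a
# text-classification task. This is NOT Proofpoint's implementation;
# "bert-base-uncased" is a placeholder for a checkpoint that would, in
# practice, be fine-tuned on labeled phishing/BEC emails.
from transformers import pipeline

# Load a BERT sequence-classification pipeline. With the base checkpoint
# the classification head is untrained, so verdicts here are illustrative.
classifier = pipeline("text-classification", model="bert-base-uncased")

email_body = (
    "This is the CFO. I need an urgent wire transfer processed before "
    "end of day. Keep this confidential and reply only to this address."
)

# The model scores the message text; a platform like CLEAR would act on
# high-confidence malicious verdicts within its detection workflow.
print(classifier(email_body))
```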

At the same time, Proofpoint previewed Proofpoint Security Assistant, a natural language interface for launching queries. The Security Assistant will be available in the fourth quarter as part of the company’s Sigma Information Protection data loss prevention platform. It promises to make it simpler for cybersecurity teams to surface issues and trends, such as possible attack paths, and will be extended next year to support the company’s Aegis and Identity Threat Defense platforms.

Proofpoint already makes use of machine learning algorithms to combat cybersecurity threats, but, like most providers of cybersecurity platforms, it is moving to incorporate generative AI capabilities to better thwart cyberattacks such as business email compromise (BEC), ransomware, weaponized URLs and multifactor authentication (MFA) bypass for credential phishing.

Ryan Witt, vice president of industry solutions at Proofpoint, said it’s clear that a race for AI superiority is underway, as cybercriminals are also taking advantage of generative AI to launch cyberattacks. Without the aid of AI, those attacks will be extremely difficult to detect. Most of them will rely on the same social engineering tactics and techniques that cybercriminals have used successfully in the past, but in addition to being better crafted with the help of AI, they will also arrive in greater volume as the time needed to create and launch them dwindles to minutes.

Cybersecurity teams, in turn, will only have a few minutes to respond to any given breach, so thwarting as many of these attacks as possible within seconds of detection is more critical than ever, noted Witt.

In fact, a global survey of 659 board members at organizations with 5,000 or more employees published by Proofpoint today found 59% of respondents believe generative AI is a security risk for their organization.

Overall, the survey found nearly three-quarters of respondents (73%) feel their organization is at risk of a material cyberattack, with 53% reporting they feel their organization is unprepared to cope with a targeted attack.

Board members have those concerns even though 73% said cybersecurity is a priority, with 72% noting the board clearly understands the cybersecurity risks they face. More than half said they interact with security leaders regularly, with 65% claiming they see eye-to-eye with their CISO. A total of 70% said the organization has adequately invested in cybersecurity, while 84% added their cybersecurity budget will increase over the next 12 months.

Nevertheless, 53% said their organization is unprepared to cope with a cyberattack in the next 12 months, with malware ranked as their top concern (40%), followed by insider threats (36%) and cloud account compromise (36%). More than a third (37%) said their organization would benefit from a bigger cybersecurity budget, while 35% would like to see more cybersecurity resources made available, along with better threat intelligence.

Nearly three-quarters (72%) also expressed concern about personal liability after a cybersecurity incident at their own organization. Much of that concern is driven by new rules being imposed by the Securities and Exchange Commission (SEC), noted Witt.

Regardless of the motivation, the rules of the cybersecurity game have fundamentally changed. What remains to be seen is how well prepared organizations will be the first time they encounter a cyberattack that leverages AI to wreak havoc.


