47% of Organizations Have Dealt With Deepfake Attacks

According to a recent iProov report, the risk of deepfakes is rising: almost half of organizations (47%) have encountered a deepfake, and nearly three-quarters (70%) believe deepfake attacks, which are created using generative AI tools, will have a high impact on their organizations. Sixty-eight percent believe generative AI is impactful at creating cybersecurity threats, but even more (84%) find it instrumental in protecting against them. 

While organizations recognize the increased efficiencies that AI can bring, those benefits are also enjoyed by threat technology developers and bad actors. Almost three-quarters (73%) of organizations are implementing solutions to address the deepfake threat, but confidence is low: the study identifies an overriding concern that organizations are not doing enough to combat deepfakes, and nearly two-thirds (62%) worry their organization isn't taking the threat seriously enough. 

The study also reveals some rather nuanced perceptions of deepfakes on the global stage. APAC (51%), European (53%), and LATAM (53%) organizations are significantly more likely than North American (34%) organizations to say they have encountered a deepfake. APAC (81%), European (72%), and North American (71%) organizations are significantly more likely than LATAM organizations (54%) to believe deepfake attacks will have an impact on their organization. 

Deepfakes are now tied for third place among the most prevalent concerns for survey respondents, ranked as follows: password breaches (64%), ransomware (63%), phishing/social engineering attacks (61%), and deepfakes (61%). 

There are many different types of deepfakes, but they all share one common denominator: they are created using generative AI tools. Organizations recognize that generative AI is innovative, secure, and reliable, and that it helps them solve problems. They view it as more ethical than unethical and believe it will have a positive impact on the future. And they're taking action: just 17% have failed to increase their budgets for programs that address the risk of AI. Additionally, most have introduced policies on the use of new AI tools. 

Read the report.


