AI anxiety afflicts 90% of consumers and businesses – see what worries them most


Consumers are anxious about how businesses are keeping their data secure, especially amid the rise of artificial intelligence (AI), while businesses are anxious about how hackers will exploit their use of AI.

The technology may be improving business efficiencies, but 50% of cybersecurity leaders in Asia-Pacific expect AI to be used to crack passwords or encryption codes, according to a survey commissioned by Cloudflare, which polled 3,844 cybersecurity decision-makers across 14 markets in Asia-Pacific, including Australia, China, India, Singapore, and South Korea.

Also: AI is changing cybersecurity and businesses must wake up to the threat

Another 47% believe AI will make phishing and social engineering attacks more effective, while 44% say it will be used to enhance DDoS (distributed denial-of-service) attacks.

In addition, 40% expect AI to play a role in creating deepfakes and facilitating privacy breaches, the study revealed. 

Furthermore, a host of challenges — including budgetary pressures, the responsibility of securing technology for business transformation, and emerging risks such as AI-powered attacks — have brought more complexity to the business environment. 

Some 86% of cybersecurity leaders surveyed admit that this complexity is making their organizations more vulnerable to attacks.

In fact, 41% say their organization has experienced a data breach over the past 12 months; of those, 47% say they have encountered at least 10 data breaches.

Another 76% of those who experienced a data breach in the past year say the frequency of data breaches has increased. Some 58% expect to see a higher number over the next 12 months, with 70% having either experienced a data breach over the past year or believing they will fall victim to one in the year ahead.

Also: Businesses’ cloud security fails are ‘concerning’ – as AI threats accelerate

About 92% say the data loss led to further business impact beyond the immediate breach, including 26% that faced regulatory action, 17% that experienced further attacks using the data, and 16% that suffered reputational damage.

Furthermore, 87% of respondents say AI has either contributed to more frequent attacks or enabled cybercriminals to launch more sophisticated attacks. 

However, 83% believe their cybersecurity teams can stay ahead of threat actors tapping AI to power cyberattacks in the future. That optimism about future defense capabilities contrasts with the just 28% who feel their organization is highly prepared today should it be targeted in an AI-enabled data breach.

Amid the changing threat landscape, 70% say their organizations are adjusting how they operate, with 40% pointing to governance and regulatory compliance as a key area of change. Another 39% say they have adapted their cybersecurity strategy, while 36% cite changes to vendor engagement, according to the survey.

Also: Cybersecurity professionals are turning to AI as more lose control of detection tools

All of the respondents also expect to roll out at least one AI-enabled security tool or measure. Some 45% point to hiring generative AI analysts as a top priority, 40% highlight the need to invest in threat detection and response systems, and another 40% say they will enhance their SIEM (security information and event management) systems.

Consumers worry when their data is used for AI

Meanwhile, 79% of consumers believe companies are collecting too much of their personal or financial data, according to a separate study released by data security vendor Cohesity. Conducted by Censuswide, the survey polled 6,002 respondents from the UK, US, and Australia.

Another 91% are concerned AI will make it more challenging for companies to secure and manage their data, the study finds. In fact, 83% in Australia believe AI poses a risk to data protection and security, as do 64% in the UK and 72% in the US.

A further 83% in Australia are anxious about unrestricted or unpoliced use of AI with their data, alongside 81% in the US and 70% in the UK. The majority of respondents across the board want greater transparency and regulation in this area, the study notes.

Also: AI gold rush makes basic data security hygiene critical

At a bare minimum, 88% in Australia say their permission should be sought before personal or financial data is fed into AI models. Another 85% in the US and 74% in the UK believe likewise.

In Australia, 90% want businesses to vet the data security and management practices of third-party providers that have access to customer data, as do 85% in the US and 77% in the UK. In addition, 90% in Australia, 87% in the US, and 79% in the UK want to know whom their data is being shared with. 

More than 90% across the three markets say they may stop transacting with a company if it falls victim to a cybersecurity attack.

As it is, 75% in the US and 62% in Australia say they have been personally impacted by a cyberattack. 

Also: Safety guidelines provide necessary first layer of data protection in AI gold rush

Consumers believe companies have a lot of catching up to do in the area of data governance and security, said James Blake, Cohesity’s global chief security strategist. 

“The hunger for AI is causing some businesses to skip threat modeling and due diligence on how their data will be exposed,” Blake said. “Companies looking to use AI in-house must invest in the security and hygiene of their data to maintain cyber resilience in order to satisfy these consumers that are willing to vote with their purchases.”
