Over a Third of Employees Secretly Sharing Work Info with AI


More than a third (38%) of employees share sensitive work information with AI tools without their employer’s permission, according to new research by CybSafe and the National Cybersecurity Alliance (NCA).

The report found that this behavior was particularly prevalent among younger generations.

Around half (46%) of Gen Z and 43% of millennials surveyed admitted sharing sensitive work information with such tools without their employer’s knowledge.

CybSafe surveyed more than 7,000 individuals across the US, UK, Canada, Germany, Australia, India and New Zealand as part of its research.

The survey also found that 52% of employed participants have not yet received any training on safe AI use.

Additionally, 58% of students have not received such training, and the figure rises to 84% among those not actively employed and 83% among retirees.

Oz Alashe, CEO and Founder of CybSafe, commented: “The introduction of AI has created a whole new category of security behaviours for CISOs and business leaders to be concerned with. While the security community is well aware of the threats posed by AI, it’s clear this awareness has not yet translated into consistent security behaviors within the workforce.”

AI Poses “Biggest Ever” Risks

Ronan Murphy, Member of the AI Advisory Council for the Government of Ireland, told Infosecurity that AI tools' access to organizational data represents the biggest risk any industry has ever faced with regard to cybersecurity, governance and compliance.

“If you feed an AI model with all your IP, then anybody with access to it can ask it to spill the beans,” he explained.

Murphy added: “In order to embrace AI and drive operational efficiency, organizations need to make sure that the foundation layer, which is your data, is properly sanitized before it goes into any of these AI applications.”
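Murphy did not detail a specific mechanism, but the principle can be illustrated with a minimal sketch. The Python snippet below is a hypothetical example, not anything referenced in the research: it shows the kind of sanitization step he describes, redacting obvious sensitive tokens from a prompt before it is passed to an external AI tool. The `PATTERNS` dictionary and `sanitize` function are illustrative assumptions; a production deployment would rely on a dedicated DLP or PII-detection service rather than a handful of regexes.

```python
import re

# Hypothetical patterns for a few common sensitive tokens. A production
# system would use a dedicated DLP/PII-detection service, not regexes alone.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9_]{16,}\b"),
}

def sanitize(text: str) -> str:
    """Replace each match of a sensitive pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label}]", text)
    return text

prompt = "Summarize this: contact jane.doe@example.com, key sk_live_abcdef1234567890"
print(sanitize(prompt))
# Summarize this: contact [REDACTED_EMAIL], key [REDACTED_API_KEY]
```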

Concern and Lack of Trust in AI Prevalent

Two-thirds (65%) of respondents also expressed concern about AI-related cybercrime, such as attackers leveraging these tools to create more convincing phishing emails.

Over half (52%) believe AI will make it harder to detect scams, and 55% said this technology will make it more difficult to be secure online.

Additionally, respondents were almost evenly split on trust in companies' implementation and use of AI, with 36% expressing high trust in organizations' implementation of AI and 35% expressing low trust.

The remainder (29%) had a neutral stance.

Around a third (36%) believe companies are ensuring that AI technologies are free of bias, while 30% remain unconvinced.

There was also an even split in respondents’ level of confidence in their ability to recognize AI-generated content, with 36% expressing high confidence and 35% low confidence.

Worryingly, 36% believe it is likely that AI will influence their judgment of what is real and what is fake during election campaigns.


