Humans Need to Rethink Trust in the Wake of Generative AI
As generative AI rapidly evolves, one of the biggest risks that is being discussed is the potential for the technology to be used to generate disinformation and misinformation. This means that humans need to rethink how and what we trust.
Of the 2,300 digital trust professionals surveyed by ISACA for its Generative AI Survey, 77% said the top risk posed by generative AI today is misinformation and disinformation.
The top five concerns surrounding generative AI were:
- Disinformation/misinformation (77%)
- Privacy violations (68%)
- Social engineering (63%)
- Loss of intellectual property (58%)
- Job displacement (35%)
Deepfakes, which are used to spread dis- and misinformation, alter a person’s likeness in a photo or video clip, generating entirely new content that appears highly realistic.
Communicating via video or audio conveys information much faster than text, and there is a risk that AI-generated video and voice communications will spread untruthful content or be used to trick victims into taking harmful actions.
Chris Dimitriadis, Global Chief Strategy Officer at ISACA, said during the association’s Digital Trust Summit in Dublin, Ireland: “Pictures are worth a thousand words, and we’re not trained to question what we see. We’re only trained to question what we hear or read, so this is a new advent for the human race, to question what we see as being legitimate or not.”
In summer 2023, UK TV personality and financial expert Martin Lewis spoke out after a deepfake likeness of himself promoting an investment scam was published on Facebook.
Meanwhile, a recent study by University College London found that humans fail to detect deepfake speech 27% of the time.
Learning to Trust Again
Speaking to Infosecurity, Enrique Perez, strategic communication specialist with NATO, said: “The problem is you cannot believe anything any more even though you are seeing and hearing it. It is a trust issue, and we have to learn to trust again.”
Perez called on organizations to work together to combat today’s evolving cybersecurity challenges.
“Nobody can act alone, we need each other, and the sharing of information is needed,” he said.
Of the 334 surveyed business and IT professionals working in Europe, 99% say they are worried, to some extent, about the potential exploitation of generative AI by bad actors. Furthermore, 74% believe cybercriminals are harnessing AI with equal or even greater success than digital trust professionals.
Read more: AI to Create Demand for Digital Trust Professionals