Clickbait and Switch: How AI Makes Disinformation Go Viral | McAfee Blog
Bad news travels quickly. Or so goes the old saying. Yet we do know this: disinformation and fake news spread faster than the truth. And what makes it spread even faster is AI.
A recent study on the subject shows that fake news travels across the internet faster than stories that are true. Complicating matters is just how quickly and easily people can create fake news stories with AI tools.
Broadly speaking, AI-generated content has flooded the internet in the past year — an onrush of AI voice clones, AI-altered images, AI video deepfakes, and all manner of text in posts. Not to mention, entire websites are populated with AI-created content.
Published research shows how this glut of AI-created content has grown since AI tools became publicly available in 2023. In the first three months of 2024 alone, the research suggests, the volume of deepfakes worldwide surged by 245% compared to the start of 2023. In the U.S., that figure jumped to 303%.[i]
But before we dive into the topic, we need to make an important point — not all AI-generated content is bad. Companies use AI deepfake technologies to create training videos. Studios use AI tools to dub movies into other languages and create captions. And some content creators just want to get a laugh out of Arnold Schwarzenegger singing show tunes. So, while deepfakes are on the rise, not all of them are malicious.
The problem arises when people use deepfakes and other AI tools to spread disinformation. That’s what we’ll focus on here.
First, let’s look at what deepfakes are and what disinformation really is.
What is a deepfake?
First, what is a deepfake? One dictionary definition of a deepfake reads like this:
An image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.[ii]
Looking closely at that definition, three key terms stand out: “altered,” “manipulated,” and “misrepresent.”
Altered
This term relates to how AI tools work. People with little to no technical expertise can tamper with existing source materials (images, voices, video) and create clones of them.
Manipulated
This speaks to what can be done with these copies and clones. With them, people can create entirely new images, tracts of speech, and videos.
Misrepresent
Lastly, this gets to the motives of the creators. They might create a deepfake as an obvious spoof like many of the parody deepfakes that go viral. Or maliciously, they might create a deepfake of a public official spewing hate speech and try to pass it off as real.
Again, not all deepfakes are malicious. It indeed comes down to what drives the creator. Does the creator want to entertain with a gag reel or inform with a how-to video narrated by AI? That’s fine. Yet if the creator wants to besmirch a political candidate, make a person look like they’ve said or done something they haven’t, or to pump out false polling location info to skew an election, that’s malicious. They clearly want to spread disinformation.
What is disinformation — and misinformation?
You might see and hear these terms used interchangeably. They’re different, yet they’re closely related. And both play a role in elections.
Disinformation is intentionally spreading misleading info.
Misinformation is unintentionally spreading misleading info (the person sharing the info thinks it’s true).
This way, you can see how disinformation spreads. A bad actor posts a deepfake with misleading info — a form of disinformation. From there, others take the misleading info at face value, and pass it along as truth — a form of misinformation.
The two work hand-in-hand by design, because bad actors have a solid grasp on how lies spread online.
How do deepfakes spread?
Deepfakes primarily spread on social media. And disinformation there has a way of spreading quickly.
Researchers found that disinformation travels deeper and more broadly, reaches more people, and goes more viral than any other category of false info.[iii]
According to the research findings published in Science,
“We found that false news was more novel than true news, which suggests that people were more likely to share novel information … Contrary to conventional wisdom, robots accelerated the spread of true and false news at the same rate, implying that false news spreads more than the truth because humans, not robots, are more likely to spread it.”
Thus, bad actors pump false info into social media channels and let people spread it by way of shares, retweets, and the like.
And convincing deepfakes have only made it easier for bad actors to spread disinformation.
How AI tools supercharge the spread of disinformation and “fake news”
The advent of AI tools has spawned a glut of disinformation unseen before, and for two primary reasons:
- Bogus articles, doctored photos, and fake news sites once took time and effort to cook up. Now, they take seconds.
- AI tools can effectively clone voices and people to create convincing-looking deepfakes in digital form.
In effect, the malicious use of AI makes it easier for fakery to masquerade as reality with chilling authenticity. Moreover, it churns out fake news at a scope and scale that’s increasing rapidly, as the figures above show.
AI tools can certainly create content quickly, but they also do the work of many. What once took sizable ranks of writers, visual designers, and content producers to create fake stories, fake images, and fake videos now gets done with AI tools. Also as mentioned above, we’re seeing entire websites that run on AI-generated content, which then spawn social media posts that point to their phony articles.
Clickbait and switch — the “Disinformation Economy”
Largely we’ve talked about disinformation, fake news, and deepfakes in the context of politics and in attempts to mislead people. Yet there’s another thing about malicious deepfakes and the bad news they peddle. They’re profitable.
Bad news gets clicks, and clicks generate ad revenue. Now with AI powering increasingly high volumes of clickbait-y bad news, it’s led to what some researchers have coined the “Disinformation Economy.” This means that the creators of some deepfakes might not be politically motivated at all. They’re in it just for the money. The more people who fall for their fake stories, the more money they make as people click.
And early indications show that disinformation has broader economic effects as well.
Researchers at the Centre for Economic Policy Research (CEPR) in Europe have started exploring the impact of fake news on economic stability. In their first findings, they said, “Fake news profoundly influences economic dynamics.”[iv] Specifically they found that as fake news sows seeds of uncertainty, it reverberates through the economy, leading to increased unemployment rates and lower industrial production.
They further found bad news can lead to pessimism, particularly about the economy, which leads to people spending less and lower sales for companies — which further fuels unemployment and reductions in available jobs as companies cut back.[v]
Granted, these early findings call for more research. Yet we can say this: many people turn to social media for their news, the place where fake news and malicious deepfakes spread.
Global research from Reuters uncovered that more people primarily get their news from social media (30%) than from an established news site or app (22%).[vi] This marks the first time that social media has surpassed direct access to news. Now, if that leads to exposure to significant portions of pessimistic fake news, it makes sense that millions of people could have their perceptions altered by it to some extent — which could translate into some form of economic impact.
Stopping the spread of disinformation and malicious deepfakes
As you can quickly surmise, that comes down to us. Collectively. The fewer people who like and share disinformation and malicious deepfakes, the quicker they’ll die off.
A few steps can help you do your part in curbing disinformation and malicious deepfakes …
Verify, then share.
This all starts by ensuring what you’re sharing is indeed the truth. Doubling back and doing some quick fact-checking can help you make sure that you’re passing along the truth. Once more, bad actors entirely rely on just how readily people can share and amplify content on social media. The platforms are built for it. Stop and verify the truth of the post before you share.
Come across something questionable? You can turn to one of several fact-checking organizations and media outlets that make it their business to separate fact from fiction:
Flag falsehoods.
If you strongly suspect that something in your feed is a malicious deepfake, flag it. Social media platforms have reporting mechanisms built in, which typically include a reason for flagging the content.
Get yourself a Deepfake Detector.
Our new Deepfake Detector spots AI phonies in seconds. It works in the background as you browse — and lets you know if a video or audio clip contains AI-generated audio. All with 95% accuracy.
Deepfake Detector monitors audio being played through your browser to determine if the content you’re watching or listening to contains AI-generated audio. McAfee doesn’t store any of this audio or browsing history.
Further, a browser extension shows just how much audio was deepfaked, and at what point in the video that content cropped up.
McAfee Deepfake Detector is available for English language detection in select new Lenovo AI PCs, ordered on Lenovo.com and select local retailers in the U.S., UK, and Australia.
Stopping deepfakes really comes down to us
From January to July of 2024, states across the U.S. introduced or passed 151 bills that deal with malicious deepfakes and deceptive media.[vii] However, stopping their spread really comes down to us.
The people behind AI-powered fake news absolutely rely on us to pass them along. That’s how fake news takes root, and that’s how it gets an audience. Verifying that what you’re about to share is true is vital — as is flagging what you find to be untrue or questionable.
Whether you use fact-checking sites to verify what you come across online, use a tool like our Deepfake Detector, or simply take a pass on sharing something that seems questionable, they’re all ways you can stop the spread of disinformation.
[i] https://sumsub.com/newsroom/deepfake-cases-surge-in-countries-holding-2024-elections-sumsub-research-shows/
[ii] https://www.merriam-webster.com/dictionary/deepfake
[iii] https://science.sciencemag.org/content/359/6380/1146
[iv] https://cepr.org/voxeu/columns/buzz-bust-how-fake-news-shapes-business-cycle
[v] https://www.uni-bonn.de/en/news/134-2024
[vi] https://reutersinstitute.politics.ox.ac.uk/digital-news-report/2023/dnr-executive-summary
[vii] Ibid.