How new Facebook policies incentivize spreading misinformation



The consequences of Meta’s recent content moderation change are starting to reveal themselves.

Last month, Meta announced it would roll back its fact-checking program on Facebook, Instagram, and Threads starting this spring in favor of a Community Notes approach, where individual users volunteer to comment on posts with additional context or differing information. As with X’s Community Notes program, the requirements for what volunteers need to include in a note are slim compared to actual fact-checking; they simply need to follow Meta’s Community Standards, stay under 500 characters, and include a link. 

Also: How to delete Facebook, Messenger, or Instagram – if you want Meta out of your life

Meta will remain the authority on content that falls into illegal territory, including fraud, child sexual exploitation, and scams. This leaves contentious, misleading, and AI-generated content that falls outside those categories in a gray area, with little quantifiable oversight. 

On Monday, ProPublica published an analysis that pointed out another change: In October, Meta launched a new monetization program that revives its Performance Bonus, which pays creators for posts that hit certain engagement metrics. The program has been invitation-only so far, but Meta plans to expand its availability sometime this year. 

In the past, Meta has not rewarded content flagged by fact-checkers; however, ProPublica notes, that policy won’t matter once those flags cease to exist. This effectively incentivizes users to create viral “hoax” content for money — though Meta did say it “may still reduce the distribution of certain hoax content whose spread creates a particularly bad user or product experience (e.g., certain commercial hoaxes).”

Also: How to become a Meta Community Notes editor

As an example of what this could amplify, ProPublica found 95 Facebook pages “that regularly post made-up headlines designed to draw engagement — and, often, stoke political divisions,” which it noted were primarily managed by people outside the US for a collective audience of over 7.7 million followers. Upon review, Meta told ProPublica it had removed 81 of these pages, but did not confirm whether they were receiving viral content payouts. 

It’s unclear whether Meta will somehow graft a version of that no-payout policy onto the Community Notes program; with such different evaluation criteria, it’s hard to see how that would work. 

While the Cambridge Analytica scandal of 2018 centered on the manipulation of accessible Facebook user data, it also revealed the ease with which targeted campaigns, regardless of factuality, can circulate on social platforms. Social media companies’ use of personalized algorithms makes this especially effective. 

Also: I tested 10 AI content detectors – and these 3 correctly identified AI text every time

Recently, xAI’s Grok chatbot was caught apparently suppressing unfavorable information about Elon Musk and President Trump in responses to users. OpenAI recently updated its Model Spec to allow ChatGPT to engage with queries it previously wouldn’t have. The Trump administration is in the process of diminishing the powers of US AI regulatory bodies, which monitor AI companies and tools for safety and proper use.

While these are discrete instances, they are also related shifts in a web of internet tools from which many US citizens get most or all of their information; as Pew Research found, one in five adults in the US get news from "news influencers" (even if they can't exactly name them). 

The Grok incident spotlights how these systems can be manipulated for individual interests, even as tech companies claim to be creating more “intellectual freedom” and reducing censorship. 

Also: Yikes: Jailbroken Grok 3 can be made to say and reveal just about anything

Social media has never been an airtight source of information, and studies have identified limits to the effectiveness of fact-checking on social platforms. Even so, this shift could further deepen the information quality divide. Putting even more of the onus on users to verify posts poses a unique threat given the cavernous media literacy gap in the US. It could also push more reliable information behind barriers, including paywalls. 

Whether a user sees actual news in their feed — or pays attention to the Community Notes on a post — will depend on how that content competes in Meta’s algorithm, where it will increasingly be up against incentivized, inflammatory posts.




