
Meta, the parent company of Facebook and Instagram, announced it will discontinue its third-party fact-checking program in favor of a user-driven “Community Notes” model.
Modeled on the crowdsourced Community Notes system popularized by Elon Musk's X, the move marks Meta's departure from its traditional approach to moderating misinformation, beginning in the United States.
The decision to end its fact-checking partnership with independent third parties stems from Meta’s belief that reliance on external experts introduced bias into content evaluations.
In an official statement, the company explained, “Fact-checkers bring valuable expertise but often carry their own perspectives, which has led to controversy and dissatisfaction among users. Our pivot to Community Notes enables a broader, more democratic solution.”
Community Notes, which Meta promises to roll out gradually, will allow users to annotate and provide context to posts. This crowdsourced moderation model seeks to empower individuals to actively participate in curbing misinformation, leveraging diverse insights rather than centralized authority.
Meta's overhaul extends beyond fact-checking. In a controversial move, the company also announced plans to relax restrictions on certain topics it considers part of mainstream discourse.
While Meta did not specify which topics would face fewer restrictions, it said enforcement will now concentrate on "high-severity violations" such as terrorism, child exploitation, and illicit drug-related content.
Nick Clegg, President of Global Affairs at Meta, explained, “We are refocusing our efforts to protect users from genuinely harmful threats while fostering a space where lawful, even if contested, ideas can thrive. Speech is essential, but safety is our utmost priority.”
The new strategy echoes broader debates about free speech versus content moderation, highlighting tech companies’ struggles to strike a balance in the digital age.
As Meta redefines its stance, questions remain about whether the user-driven model can effectively tackle misinformation, and whether the shift will foster greater trust or sow chaos across its platforms.