Controversy is brewing around Meta’s decision to end its third-party fact-checking program. Experts warn the move could turn its platforms into a breeding ground for disinformation and hate, with consequences that eventually seep into the real world.
Meta’s New Approach: Crowd-Sourced Content Moderation
Until recently, Meta employed a fleet of independent fact-checkers around the globe to identify and scrutinize misinformation across its social media platforms. That model is now being phased out in favor of a crowd-sourced approach akin to X’s Community Notes.
Meta’s decision effectively transfers the responsibility of sifting through untruths on Facebook, Instagram, WhatsApp, and Threads to its users. The shift has sparked anxiety over a potential increase in misleading content on topics including climate change, clean energy, public health risks, and marginalized groups at higher risk of violence.
Positive Impact of Fact-Checking Program: Hindered Spread of Hoaxes
The end of the fact-checking program could cause more harm than good, especially to Meta’s users. Angie Drobnic Holan, the director of the International Fact-Checking Network (IFCN) at Poynter, argued, “The program worked well at reducing the virality of hoax content and conspiracy theories.”
Many critics question the effectiveness of Community Notes-style moderation, arguing it amounts to little more than a facade. That skepticism casts doubt on Meta’s real commitment to tackling disinformation on its platforms and underscores the pressure users will now face to distinguish facts from falsehoods on their own.
Fact-Checkers and Meta: A Fair Partnership?
Meta collaborated with IFCN-certified fact-checkers who adhered to both the network’s Code of Principles and Meta’s own policies. The fact-checkers analyzed content and rated its accuracy, while the decision to restrict or remove content rested with Meta, not with them. According to Holan, this system functioned as a deterrent against the spread of falsehoods.
Inevitable Consequences of Meta’s Shift in Policy
Not long after Meta’s announcement, critics voiced concerns about the decision’s possible offline implications. Imran Ahmed, founder and CEO of the Center for Countering Digital Hate, described it as a “huge step back for online safety, transparency, and accountability.”
The shift in policy is worrisome for environmental organizations and scientists alike. Kate Cell, senior climate campaign manager at the Union of Concerned Scientists, raised concerns over the possibility of anti-scientific content proliferating on Meta’s platforms.
Final Thoughts
Meta’s decision reflects the delicate balance between upholding freedom of speech and curbing the spread of false information online. Nevertheless, the responsibility of mitigating an increasingly disinformation-ridden digital landscape should not fall solely on the shoulders of users; it should be a collective effort between tech companies and the people who use their platforms.