Climate misinformation is rife on social media – and poised to get worse
Meta’s decision could open the floodgates to more climate misinformation on its apps, including misleading or out-of-context claims during disasters.
The decision by Meta, the parent company of Facebook and Instagram, to end its fact-checking program and otherwise reduce content moderation raises the question of what content on those social media platforms will look like going forward.
One worrisome possibility is that the change could open the floodgates to more climate misinformation on Meta’s apps, including misleading or out-of-context claims during disasters.
In 2020, Meta rolled out its Climate Science Information Center on Facebook to respond to climate misinformation. Currently, third-party fact-checkers working with Meta flag false and misleading posts. Meta then decides whether to attach a warning label to them and reduce how much the company’s algorithms promote them.
Meta’s policies have fact-checkers prioritizing “viral false information,” hoaxes and “provably false claims that are timely, trending and consequential.” Meta explicitly states that this excludes opinion content that does not include false claims.
The company will end its agreements with U.S.-based third-party fact-checking organizations in March 2025. The changes, slated to roll out to U.S. users, won't affect fact-checking of content viewed by users outside the U.S. The tech industry faces greater regulation on combating misinformation in other regions, such as the European Union.
Fact-checking curbs climate misinformation
I study climate change communication. Fact-checks can help correct political misinformation, including on climate change. People’s beliefs, ideology and prior knowledge affect how well fact-checks work. Finding messages that align with the target audience’s values, along with using trusted messengers – like climate-friendly conservative groups when speaking to political conservatives – can help. So, too, does appealing to shared social norms, like limiting harm to future generations.
Heat waves, flooding and fire conditions are becoming more common and catastrophic as the world warms. Extreme weather events often lead to a spike in social media attention to climate change. Social media posting peaks during a crisis but drops off quickly.
Low-quality fake images created using generative artificial intelligence software, so-called AI slop, are adding to confusion online during crises. For example, in the aftermath of back-to-back hurricanes Helene and Milton last fall, fake AI-generated images of a young girl, shivering and holding a puppy in a boat, went viral on the social media platform X. The spread of rumors and misinformation hindered the Federal Emergency Management Agency's disaster response.
What distinguishes misinformation from disinformation is the intent of the person or group doing the sharing. Misinformation is false or misleading content shared without active intention to mislead. On the other hand, disinformation is misleading or false information shared with the intent to deceive.
Disinformation campaigns are already happening. In the wake of the 2023 Hawaii wildfires, researchers at Recorded Future, Microsoft, NewsGuard and the University of Maryland independently documented an organized propaganda campaign by Chinese operatives targeting U.S. social media users.
To be sure, the spread of misleading information and rumors on social media is not a new problem. However, not all content moderation approaches have the same effect, and platforms are changing how they address misinformation. For example, X replaced its rumor controls, which had helped debunk false claims during fast-moving disasters, with user-generated labels called Community Notes.
False claims can go viral rapidly
Meta CEO Mark Zuckerberg specifically cited X's Community Notes as an inspiration for his company's planned changes in content moderation. The trouble is that false claims go viral quickly. Recent research has found that the response time of crowd-sourced Community Notes is too slow to stop the diffusion of viral misinformation early in its online life cycle – the point when posts are most widely viewed.
In the case of climate change, misinformation is “sticky.” It is especially hard to dislodge falsehoods from people’s minds once they encounter them repeatedly. Furthermore, climate misinformation undermines public acceptance of established science. Just sharing more facts does not work to combat the spread of false claims about climate change.
Explaining that scientists agree that climate change is happening and is caused by humans burning fossil fuels can prepare people to avoid misinformation. Psychology research indicates that this "inoculation" approach works to reduce the influence of false claims to the contrary.
That’s why warning people against climate misinformation before it goes viral is crucial for curbing its spread. Doing so is likely to get harder on Meta’s apps.
Social media users as sole debunkers
With the coming changes, you will be the fact-checker on Facebook and other Meta apps. The most effective way to debunk climate misinformation is to lead with accurate information, then briefly warn about the myth – stating it only once. Then explain why it is inaccurate and repeat the truth.
During climate change-fueled disasters, people are desperate for accurate and reliable information to make lifesaving decisions. Finding that information is already challenging enough, as when Los Angeles County's emergency management office erroneously sent an evacuation alert to 10 million people on Jan. 9, 2025.
Crowd-sourced debunking is no match for organized disinformation campaigns in the midst of information vacuums during a crisis. The conditions for the rapid and unchecked spread of misleading, and outright false, content could get worse with Meta’s content moderation policy and algorithmic changes.
The U.S. public by and large wants the industry to moderate false information online. Instead, it seems that big tech companies are leaving fact-checking to their users.
Jill Hopke does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.