Climate misinformation tactics are more effective than ever.

Fossil fuel companies spent 4.13 million USD on greenwashing ads on Facebook before and during COP27 in Egypt last year. The ads gained 246 million impressions and contained misleading or directly false claims about the climate crisis, fossil fuels and net-zero targets. The real figures are likely higher, because scientists' access to datasets in Meta's Ad Library is strictly limited to non-business ads. (Source)


More effective than ever 

Misinformation campaigns by big oil companies are not new, according to Harvard University researchers Geoffrey Supran and Naomi Oreskes. For example, an ad appeared in Life Magazine in 1959 even though US oil companies had been informed about global heating three years earlier. Today, however, the speed and efficiency of these misinformation campaigns are on a whole other level than in 1959. (Source)

Here are some of the factors that contribute to the increase in climate misinformation and disinformation.

Big Tech loves engaging content

Big Tech platforms are reluctant to put in more effort. Ads and posts with engaging, polarising views get more traffic, which translates into more ad revenue. For example, the hashtag #climateScam is used by a marginal group of climate deniers. When scientists monitored X (formerly Twitter) for five days, they found that every time they typed "cl" to search for climate information, this hashtag appeared at the top of the search results, demonstrating the pervasive reach of misleading content even when it comes from marginal groups. (Source)

New developments in AI - not your ordinary newspaper 

New developments in AI make it much easier to produce content, adapt the language, create personas that target different population groups, and automate the spread of misleading content across the Internet. Common tactics such as troll farms, bots, harassment and hacking are becoming more sophisticated with these advances. For example, climate change discussions often face suppression strategies that impede public participation and subject scientists to harassment. (Source)

Hire a "PR firm" 

Anyone can run a false news campaign, and many are doing so. So-called communication firms that create disinformation operate in at least 48 countries, according to the Oxford Internet Institute in 2020. Researchers call it computational propaganda, and it is often done on a political actor's behalf. They found that almost US $60 million was spent on hiring these firms between 2009 and 2020. (Source) This is a conservative figure, because many more firms operate in the grey zone of misinformation and disinformation, creating ads and narratives that do not directly deny climate change but instead push for delaying the actions needed to combat it and put a positive spin on the fossil fuel industry.

Next steps - How can you detect climate disinformation?

Public/Private Sector

If you are in the public or private sector, we recommend monitoring false, harmful narratives online in forums and on high-traffic websites. Spikes in disinformation usually happen before large conferences like COP and before local events. If you are working on getting a climate-related policy approved, you will also see an increase in false articles and posts intended to muddy the waters and sway public opinion for or against it. Monitoring can be done manually or automatically through platforms like ours, Factisearch.

Factisearch is a database and search engine that gives you updates on the latest trending false claims worldwide. Fact-checks often focus on the viral false videos, pictures and narratives spreading online, and expert fact-checkers debunk this type of content, explaining why it is false. We monitor these fact-checking outlets every hour in 40 languages. Sign up here: Factisearch.
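
For teams that want to script this kind of monitoring themselves, the sketch below shows one way to flag spikes in tracked keywords across a collection of posts. It is a minimal illustration in Python: the watchlist terms, the spike threshold and the sample posts are made up for the example, and in practice the posts would come from the forums and websites you monitor or from a platform like Factisearch.

```python
# Minimal sketch of automated keyword monitoring, assuming you already collect posts
# (e.g. scraped forum threads or exports from a monitoring platform such as Factisearch).
# The tracked terms, threshold and sample posts below are illustrative only.
from collections import Counter
from datetime import date

TRACKED_TERMS = {"#climatescam", "climate hoax", "net-zero lie"}  # hypothetical watchlist


def count_mentions(posts):
    """Count, per day, how many posts mention at least one tracked term."""
    daily = Counter()
    for day, text in posts:
        lowered = text.lower()
        if any(term in lowered for term in TRACKED_TERMS):
            daily[day] += 1
    return daily


def flag_spikes(daily, factor=2.0):
    """Flag days whose mention count exceeds `factor` times the average of the other days."""
    if len(daily) < 2:
        return []
    spikes = []
    for day, count in daily.items():
        baseline = sum(c for d, c in daily.items() if d != day) / (len(daily) - 1)
        if baseline and count > factor * baseline:
            spikes.append(day)
    return sorted(spikes)


if __name__ == "__main__":
    sample_posts = [  # stand-in data; replace with the posts you actually monitor
        (date(2023, 11, 28), "This climate hoax talking point keeps resurfacing."),
        (date(2023, 11, 29), "The #ClimateScam hashtag is everywhere before COP."),
        (date(2023, 11, 29), "Another climate hoax thread going viral."),
        (date(2023, 11, 29), "#climatescam trending again."),
        (date(2023, 11, 30), "A normal discussion about climate policy."),
    ]
    daily_counts = count_mentions(sample_posts)
    print("Daily mentions:", dict(sorted(daily_counts.items())))
    print("Spike days:", flag_spikes(daily_counts))
```

Comparing each day against the average of the other days is a deliberately simple baseline; a production setup would use a longer rolling window and per-source statistics.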

Newsrooms

As an editor, you receive opinion articles on many different topics, and it is hard to quickly assess the writer's agenda and whether their information is credible. With new tools like Bard and ChatGPT, crafting large numbers of opinion articles is easier than ever. To counter this, you can use Factiverse AI Editor to analyse the content of the articles you receive and detect possible factual mistakes. We search Google, Bing and Semantic Scholar simultaneously to find relevant sources, helping you understand complex topics in a few seconds and decide whether an opinion article is worth publishing. You can copy-paste the article and run a check.
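
If you prefer to run such checks from a script rather than the web editor, the sketch below shows the general shape of that workflow. Note that the endpoint URL, request fields and response format are hypothetical placeholders used for illustration, not Factiverse's documented API.

```python
# Illustrative sketch only: the endpoint, request fields and response shape are hypothetical
# placeholders, not Factiverse's documented API. The idea is simply "send draft text to a
# fact-checking service, get back flagged claims with verdicts and supporting sources".
import requests

FACT_CHECK_URL = "https://api.example.com/v1/fact_check"  # placeholder endpoint


def check_article(text, api_key):
    """Submit article text to a fact-checking service and return any flagged claims."""
    response = requests.post(
        FACT_CHECK_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"text": text, "lang": "en"},
        timeout=30,
    )
    response.raise_for_status()
    # Assumed response shape: {"claims": [{"claim": ..., "verdict": ..., "sources": [...]}]}
    return response.json().get("claims", [])


if __name__ == "__main__":
    draft = "Global temperatures have not risen since 1998."  # example sentence from a draft op-ed
    for claim in check_article(draft, api_key="YOUR_API_KEY"):
        print(claim.get("claim"), "->", claim.get("verdict"), claim.get("sources"))
```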

The AI Editor also works with ChatGPT: you can fact-check the chatbot's output to detect bias and hallucinations, or upload your PDF and analyse it with our plugin. Just go to gpt.factiverse.ai and start a free two-week trial. Here is a short video tutorial.

A big thank you to Rogaland Forskningsfond for supporting our pilot project on climate and sustainability disinformation. More updates to come.
