The ABCs of AI and Environmental Misinformation

DeSmog talks with AI researchers about the risks of widespread fake climate content and how to combat it.
Analysis
Stella Levantesi
Series: Gaslit
Researchers found GPT-fabricated content in Google Scholar mimicking scientific papers on issues including the environment, health, and computing. Credit: DeSmog

“So, relax and enjoy the ride. There is nothing we can do to stop climate change, so there is no point in worrying about it.”

This is what “Bard” told researchers in 2023. Bard by Google is a generative artificial intelligence chatbot that can produce human-sounding text and other content in response to prompts or questions posed by users.

But if AI can now produce new content and information, can it also produce misinformation? Experts have found evidence that it can.

In a study by the Center for Countering Digital Hate, researchers tested Bard on 100 false narratives on nine themes, including climate and vaccines, and found that the tool generated misinformation on 78 out of the 100 narratives tested. According to the researchers, Bard generated misinformation on all 10 narratives about climate change.

In 2023, another team of researchers at NewsGuard, a platform providing tools to counter misinformation, tested OpenAI’s ChatGPT-3.5 and ChatGPT-4, which can also produce text, articles, and more. According to the research, ChatGPT-3.5 generated misinformation and hoaxes 80 percent of the time when prompted to do so with 100 false narratives, while ChatGPT-4 advanced all 100 false narratives in a more detailed and convincing manner. NewsGuard found that ChatGPT-4 advanced prominent false narratives not only more frequently, but also more persuasively than ChatGPT-3.5, and created responses in the form of news articles, Twitter threads, and even TV scripts imitating specific political ideologies or conspiracy theorists.

“I think this is important and worrying, the production of fake science, the automation in this domain, and how easily that becomes integrated into search tools like Google Scholar or similar ones,” said Victor Galaz, deputy director and associate professor in political science at the Stockholm Resilience Centre at Stockholm University in Sweden. “Because then that’s a slow process of eroding the very basics of any kind of conversation.”

In another recent study published this month, researchers found GPT-fabricated content in Google Scholar mimicking legitimate scientific papers on issues including the environment, health, and computing. The researchers warn of “evidence hacking,” the “strategic and coordinated malicious manipulation of society’s evidence base,” to which Google Scholar can be susceptible.

So, we know that AI can generate misinformation, but to what extent is this an issue?

Let’s start with the basics.

Drilling Down on AI and the Environment

Let’s take ChatGPT, for example. ChatGPT is a large language model, or LLM.

LLMs are among the AI technologies that are most relevant to issues of misinformation and climate misinformation, according to Asheley R. Landrum, an associate professor at the Walter Cronkite School of Journalism and Mass Communication and a senior global futures scientist at Arizona State University.

Because LLMs can create text that appears to be human-generated, quickly and at low cost, malicious actors can “exploit” them to create disinformation with a single prompt entered by a user, Landrum said in an email to DeSmog.

Alongside LLMs, synthetic media, social bots, and algorithms are AI technologies relevant to all types of misinformation, including climate misinformation.

“Synthetic media,” which includes so-called “deep fakes,” is content that is produced or modified using AI.

“On one hand, we can be concerned that people will believe that synthetic media is real. For example, when a robocall mimicking Joe Biden’s voice told people not to vote in the Democratic primary in New Hampshire,” Landrum wrote in her email. “Another concern, and one I find more problematic, is that the mere existence of deep fakes allows public figures and their audiences to dismiss real information as fake.”

Synthetic media also includes photos. In March 2023, the Texas Public Policy Foundation, a conservative think tank that advances climate change denial narratives, used AI to create an image of a dead whale and wind turbines, and weaponized it to promote disinformation on renewable energy. 

Social bots, another technology that can spread misinformation, use AI to create messages that appear to be written by people, and they operate autonomously on social media platforms like X.

“Social bots actively amplify misinformation early on before a post officially ‘goes viral.’ And they target influential users with replies and mentions,” Landrum explained. “Furthermore, they can engage in elaborate conversations with humans, employing personalized messages aiming to alter opinion.”

Last but not least, algorithms. These filter audiences’ media and information feeds based on what is expected to be the most relevant to a user. Algorithms use AI to curate highly personalized content for users based on behavior, demographics, preferences, etc.

“This means that the misinformation that you are being exposed to is misinformation that will likely resonate with you,” Landrum said. “In fact, researchers have suggested that AI is being used to emotionally profile audiences to optimize content for political gain.”

Research shows that AI can easily create targeted, effective content. For example, a study published in January found that political ads tailored to individuals’ personalities are more persuasive than non-personalized ads. The study says these ads can be automatically generated on a large scale, highlighting the risks of using AI and “microtargeting” to craft political messages that resonate with individuals based on their personality traits.

So, once misinformation or disinformation (which is deliberate and intentional) exists, it can be spread through “the prioritization of inflammatory content that algorithms reward,” as well as by bad actors, according to a report on the threats of AI to climate published in March by the Climate Action Against Disinformation (CAAD) network.

“Many now are … questioning AI’s environmental impact,” Michael Khoo, climate disinformation program director at Friends of the Earth and lead co-author of the CAAD report, told DeSmog. The report also states that AI will require massive amounts of energy and water: On an industry-wide level, the International Energy Agency estimates that electricity consumption by the global data centers that power AI will double in the next two years, consuming as much energy as Japan. These data centers and AI systems also use large amounts of water for their operations and are often located in areas that already face water shortages, the report says.

Khoo said the biggest danger overall from AI is that it’s going to “weaken the information environment and be used to create disinformation which then can be spread on social media.”

Some experts share this view, while others are more cautious about drawing a connection between AI and climate misinformation, since it is still unknown whether, and how, it is affecting the public.

A ‘Game-Changer’ for Misinformation

“AI could be a major game changer in terms of the production of climate misinformation,” Galaz told DeSmog. The formerly costly aspects of producing misinformation, like crafting messages that target a specific audience through political predisposition or psychological profiling, and creating very convincing material, not only text but also images and videos, “can now be produced at a very low cost.”

It’s not just about cost. It’s also about volume.

“I think volume in this context matters, it makes your message easier to get picked up by someone else,” Galaz said. “Suddenly we have a massive challenge ahead of us dealing with volumes of misinformation flooding social media and a level of sophistication that [makes] it very difficult for people to see,” he added.

Galazโ€™s work, together with researchers Stefan Daume and Arvid Marklund at the Stockholm Resilience Centre, also points to three other main characteristics of AIโ€™s capacity to produce information and misinformation: accessibility, sophistication, and persuasion.

“As we see these technologies evolve, they become more and more accessible. That accessibility makes it easier to produce a mass volume of information,” Galaz said. “The sophistication [means] it’s difficult for a user to see whether something is generated by AI compared to a human. And [persuasion] prompts these models to produce something that is very specific to an audience.”

“These three in combination to me are warning flags that we might be facing something very difficult in the future.”

According to Landrum, AI undoubtedly increases the quantity and amplification of misinformation, but this may not necessarily influence public opinion.

AI-produced and AI-spread climate misinformation may also be more damaging, and gain more traction, when climate issues are at the center of the international public debate. This is not surprising, considering it follows a pattern of climate change denial, disinformation, and obstruction that has been well documented in recent decades.

“There is not yet a lot of evidence that suggests people will be influenced by [AI misinformation]. This is true whether the misinformation is about climate change or not,” Landrum said. “It seems likely to me that climate dis/misinformation will be less prevalent than other types of political dis/misinformation until there is a specific event that will likely bring climate change to the forefront of people’s attention, for example, a summit or a papal encyclical.”

Galaz echoed this, underscoring that there’s still only experimental evidence of AI misinformation affecting climate opinion, but he reiterated that the context and the current capacities of these models are a worry.

Volume, accessibility, sophistication, and persuasion all interact with another aspect of AI: the speed at which it is developing.

“Scientists are trying to catch up with technological changes that are much more rapid than our methods are able to assess. The world is changing more rapidly than we’re able to study it,” said Galaz. “Part of that is also getting access to data to see what’s happening and how it’s happening and that has become more difficult lately on platforms like X [since] Elon Musk.”

AI Tools for Climate: Solutions

Scientists and tech companies are working on AI-based methods for combating misinformation, but Landrum says they aren’t “there” yet.

It’s possible, for example, that AI chatbots or social bots could be used to provide accurate information. But the same principles of motivated reasoning that influence whether people are affected by fact checks are likely to affect whether people will engage with such chatbots; that is, if they are motivated to reject the information, to protect their identity or existing worldviews, they will find reasons to reject it, Landrum explained.

Some researchers are trying to develop machine learning tools to recognize and debunk climate misinformation. John Cook, a senior research fellow at the Melbourne Centre for Behaviour Change at the University of Melbourne, started working on this before generative AI even existed.  

“How do you generate an automatic debunking once you’ve detected misinformation? Once generative AI exploded, it really opened up the opportunity for us to complete our task of automatic debunking,” Cook told DeSmog. “So that’s what we’ve been working on for about a year and a half now: detecting misinformation and then using generative AI to actually construct the debunking [that] fits the best practices from the psychology research.”

The AI model being developed by Cook and his colleagues is called CARDS. It follows a “fact-myth-fallacy-fact” debunking structure: first, identify the key fact that replaces the myth. Second, identify the fallacy the myth commits. Third, explain how the fallacy misleads and distorts the facts. And finally, “wrapping it all together,” said Cook. “This is a structure we recommend in the debunking handbook, and none of this would be possible without generative AI,” he added.
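To make that four-step structure concrete, here is a minimal sketch of how such a prompt might be assembled in Python. This is not Cook’s code: the function names and the placeholder model call are hypothetical, illustrating only the fact-myth-fallacy-fact ordering he describes.

```python
# Hypothetical sketch of a "fact-myth-fallacy-fact" debunking prompt, following
# the structure Cook describes. `generate_text` is a stand-in for whatever LLM
# call a real system would make; it is not an actual API.

def generate_text(prompt: str) -> str:
    """Placeholder for a call to a generative language model."""
    raise NotImplementedError("Plug in your own LLM client here.")

def build_debunking_prompt(myth: str, key_fact: str, fallacy: str) -> str:
    """Assemble a debunking request in fact-myth-fallacy-fact order."""
    return (
        "Write a short debunking of a climate myth using this structure:\n"
        f"1. Lead with the key fact: {key_fact}\n"
        f"2. Briefly state the myth being corrected: {myth}\n"
        f"3. Explain the fallacy it commits ({fallacy}) and how it misleads.\n"
        "4. Close by restating the key fact.\n"
        "Use only the facts provided above; do not add new claims."
    )

if __name__ == "__main__":
    prompt = build_debunking_prompt(
        myth="The climate has always changed, so current warming is natural.",
        key_fact=("Human emissions of greenhouse gases are the dominant cause "
                  "of the warming observed since the mid-20th century."),
        fallacy=("false equivalence between past natural change and present "
                 "human-caused change"),
    )
    print(prompt)  # In a real pipeline, this would be passed to generate_text().
```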

But there are challenges with developing this tool, including the fact that LLMs can sometimes, as Cook said, “hallucinate.”

He said that to resolve this issue, his team put a lot of “scaffolding” around the AI prompts, which means adding tools or outside input to make the system more reliable. He developed a model called FLICC, based on the five techniques of climate science denial, “so that we could detect the fallacies independently and then use that to inform the AI prompts,” Cook explained. Adding these tools counteracts the problem of the AI simply generating misinformation or hallucinating, he said. “So to obtain the facts in our debunkings, we’re also pulling from a wide list of factful, reliable websites. That’s one of the flexibilities you have with generative AI, you can [reference] reliable sources if you have to.”
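As a rough illustration of that scaffolding idea, the sketch below separates fallacy detection from text generation and limits the model to pre-approved facts. Every name here is a hypothetical stand-in, not the actual FLICC or CARDS implementation.

```python
# Hypothetical sketch of "scaffolding" around an LLM prompt: a separate fallacy
# detector runs first, and the language model is constrained to a small set of
# vetted facts. Classifier, fact list, and model call are all placeholders.

TRUSTED_FACTS = {
    # In a real system, these would be retrieved from vetted, reliable websites.
    "cherry picking": ("Short-term dips in temperature do not change the "
                       "long-term warming trend seen in the full record."),
}

def classify_fallacy(claim: str) -> str:
    """Stand-in for an independently trained fallacy classifier."""
    # A real classifier would analyze the claim; here we return a fixed label.
    return "cherry picking"

def scaffolded_debunk(claim: str, llm) -> str:
    """Constrain the model with the detected fallacy and an approved fact."""
    fallacy = classify_fallacy(claim)
    fact = TRUSTED_FACTS.get(fallacy, "")
    prompt = (
        f"Myth: {claim}\n"
        f"Detected fallacy: {fallacy}\n"
        f"Approved fact you may cite: {fact}\n"
        "Write a debunking that relies only on the approved fact above."
    )
    return llm(prompt)

if __name__ == "__main__":
    # Echo the assembled prompt instead of calling a real model.
    print(scaffolded_debunk(
        "It snowed a lot this winter, so global warming has stopped.",
        llm=lambda p: p,
    ))
```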

The applications for this AI tool range from a chatbot or social bot to an app, a semi-automated, semi-human interactive tool, or even a webpage and newsletter.

Some of the AI tools also come with their own issues. “Ultimately what we’re going to do as we are developing the model is do some stakeholder engagement, talk to journalists, fact checkers, educators, scientists, climate NGOs, anyone who might potentially use this kind of tool and talk to them about how they might find it useful,” Cook said.

According to Galaz, one of AI’s strengths is analyzing and understanding patterns in massive amounts of data, which can help people if it is developed responsibly. For example, combining AI with local agricultural knowledge can help farmers confront climate-related changes, including soil depletion.

This can only work if the AI industry is held accountable, experts say. Cook believes regulation is crucial, but worries that it is difficult to get in place.

“The technology is moving so quickly that even if you are going to try to get government regulation, governments are typically slow moving in the best of conditions,” Cook pointed out. “When it’s something this fast, they’re really going to struggle to keep up. [Even scientists] are struggling to keep up because the sands are shifting underneath our feet as the research, the models, and the technology are changing as we are working on it,” he added.

Regulating AI

Scholars mostly agree that AI needs to be regulated.  

“AI is always spoken about in [these] very lighthearted, breathy terms of how it’s going to save the planet,” said Khoo. “But right now [AI companies] are avoiding accountability, transparency and safety standards that we wanted in social media tech policy around climate.”

Both in the CAAD report and the interview with DeSmog, Khoo warned about the need to avoid repeating the mistakes of policymakers who didnโ€™t regulate social media platforms. 

“We need to treat these companies with the same expectations that we have for everyone else functioning in society,” he added.

The CAAD report recommends transparency, safety, and accountability for AI. It calls for regulators to ensure AI companies report on energy use and emissions, and to safeguard against discrimination, bias, and disinformation. The report also says companies need to enforce community guidelines and monetization policies, and that governments should develop and enforce safety standards, ensuring companies and CEOs are liable for any harm to people and the environment that results from generative AI.

According to Cook, a good way to begin addressing the issue of AI-generated climate misinformation and disinformation is to demonetize it.

“I think that demonetization is the best tool, and my observation of social media platforms … is that they respond when they encounter sufficient outside pressure,” he said. “If there is pressure for them to not fund or [accept] misinformation advertisers, then they can be persuaded to do it, but only if they receive sufficient pressure.” Cook believes demonetization, along with journalists reporting on climate disinformation and shining a light on it, is one of the best tools to stop it from happening.

Galaz echoed this idea. “Self-regulation has failed us. The way we’re trying to solve it now is just not working. There needs to be [regulation] and I think [another] part is going to be the educational aspect of it, by journalists, decision makers and others.”

Stella Levantesi
Stella Levantesi is an Italian climate journalist, photographer, and author. She is the author of the Gaslit series on DeSmog. Her main areas of expertise are climate disinformation, climate litigation, and corporate responsibility on the climate crisis. Her book “I bugiardi del clima” (Climate Liars) was published in Italy with Laterza, and her work has featured in The New Republic and Nature Italy. You can follow her on Twitter @StellaLevantesi.
