“So, relax and enjoy the ride. There is nothing we can do to stop climate change, so there is no point in worrying about it.”
This is what “Bard” told researchers in 2023. Bard by Google is a generative artificial intelligence chatbot that can produce human-sounding text and other content in response to prompts or questions posed by users.
But if AI can now produce new content and information, can it also produce misinformation? Experts have found evidence that it can.
In a study by the Center for Countering Digital Hate, researchers tested Bard on 100 false narratives on nine themes, including climate and vaccines, and found that the tool generated misinformation on 78 out of the 100 narratives tested. According to the researchers, Bard generated misinformation on all 10 narratives about climate change.
In 2023, another team of researchers at NewsGuard, a platform providing tools to counter misinformation, tested OpenAI’s ChatGPT-3.5 and ChatGPT-4, which can also produce text, articles, and more. According to the research, ChatGPT-3.5 generated misinformation and hoaxes 80 percent of the time when prompted to do so with 100 false narratives, while ChatGPT-4 advanced all 100 false narratives in a more detailed and convincing manner. NewsGuard found that ChatGPT-4 advanced prominent false narratives not only more frequently, but also more persuasively than ChatGPT-3.5, and created responses in the form of news articles, Twitter threads, and even TV scripts imitating specific political ideologies or conspiracy theorists.
“I think this is important and worrying, the production of fake science, the automation in this domain, and how easily that becomes integrated into search tools like Google Scholar or similar ones,” said Victor Galaz, deputy director and associate professor in political science at the Stockholm Resilience Centre at Stockholm University in Sweden. “Because then that’s a slow process of eroding the very basics of any kind of conversation.”
In another recent study published this month, researchers found GPT-fabricated content in Google Scholar mimicking legitimate scientific papers on issues including the environment, health, and computing. The researchers warn of “evidence hacking,” the “strategic and coordinated malicious manipulation of society’s evidence base,” which Google Scholar can be susceptible to.
So, we know that AI can generate misinformation, but to what extent is this an issue?
Let’s start with the basics.
Drilling Down on AI and the Environment
Let’s take ChatGPT, for example. ChatGPT is a large language model, or LLM.
LLMs are among the AI technologies that are most relevant to issues of misinformation and climate misinformation, according to Asheley R. Landrum, an associate professor at the Walter Cronkite School of Journalism and Mass Communication and a senior global futures scientist at Arizona State University.
Because LLMs can create text that appears human generated, quickly and at low cost, malicious actors can “exploit” them to produce disinformation with a single prompt, said Landrum in an email to DeSmog.
Alongside LLMs, synthetic media, social bots, and algorithms are AI technologies relevant to all types of misinformation, including on climate.
“Synthetic media,” which includes so-called “deep fakes,” is content that is produced or modified using AI.
“On one hand, we can be concerned that people will believe that synthetic media is real. For example, when a robocall mimicking Joe Biden’s voice told people not to vote in the Democratic primary in New Hampshire,” Landrum wrote in her email. “Another concern, and one I find more problematic, is that the mere existence of deep fakes allows public figures and their audiences to dismiss real information as fake.”
Synthetic media also includes photos. In March 2023, the Texas Public Policy Foundation, a conservative think tank that advances climate change denial narratives, used AI to create an image of a dead whale and wind turbines, and weaponized it to promote disinformation on renewable energy.
Social bots, another technology that can spread misinformation, use AI to create messages that appear to be written by people and work autonomously on social media platforms like X.
“Social bots actively amplify misinformation early on before a post officially ‘goes viral.’ And they target influential users with replies and mentions,” Landrum explained. “Furthermore, they can engage in elaborate conversations with humans, employing personalized messages aiming to alter opinion.”
Last but not least, algorithms. These filter audiencesโ media and information feeds based on what is expected to be the most relevant to a user. Algorithms use AI to curate highly personalized content for users based on behavior, demographics, preferences, etc.
“This means that the misinformation that you are being exposed to is misinformation that will likely resonate with you,” Landrum said. “In fact, researchers have suggested that AI is being used to emotionally profile audiences to optimize content for political gain.”
Research shows that AI can easily create targeted, persuasive content. For example, a study published in January found that political ads tailored to individuals’ personalities are more persuasive than non-personalized ads. The study says that such ads can be automatically generated on a large scale, highlighting the risks of using AI and “microtargeting” to craft political messages that resonate with individuals based on their personality traits.
So, once misinformation or disinformation (deliberate and intentional) content exists, it can be spread through “the prioritization of inflammatory content that algorithms reward,” as well as by bad actors, according to a report on the threats of AI to climate published in March by the Climate Action Against Disinformation (CAAD) network.
“Many now are … questioning AI’s environmental impact,” Michael Khoo, climate disinformation program director at Friends of the Earth and lead co-author of the CAAD report, told DeSmog. The report also states that AI will require massive amounts of energy and water: the International Energy Agency estimates that electricity consumption by the global data centers that power AI will double in the next two years, to roughly the amount of energy Japan consumes. These data centers and AI systems also use large amounts of water for their operations and are often located in areas that already face water shortages, the report says.
Khoo said the biggest danger overall from AI is that it’s going to “weaken the information environment and be used to create disinformation which then can be spread on social media.”
Some experts share this view, while others are more cautious about linking AI to climate misinformation, since it is still unknown whether, and how, it is affecting the public.
A “Game-Changer” for Misinformation
“AI could be a major game changer in terms of the production of climate misinformation,” Galaz told DeSmog. Once-costly products, like messages targeting a specific type of audience through political predisposition or psychological profiling, and very convincing material, not only text but also images and videos, “can now be produced at a very low cost.”
It’s not just about cost. It’s also about volume.
“I think volume in this context matters, it makes your message easier to get picked up by someone else,” Galaz said. “Suddenly we have a massive challenge ahead of us dealing with volumes of misinformation flooding social media and a level of sophistication that [makes] it very difficult for people to see,” he added.
Galaz’s work, together with researchers Stefan Daume and Arvid Marklund at the Stockholm Resilience Centre, also points to three other main characteristics of AI’s capacity to produce information and misinformation: accessibility, sophistication, and persuasion.
“As we see these technologies evolve, they become more and more accessible. That accessibility makes it easier to produce a mass volume of information,” Galaz said. “The sophistication [means] it’s difficult for a user to see whether something is generated by AI compared to a human. And [persuasion means] prompting these models to produce something that is very specific to an audience.”
“These three in combination to me are warning flags that we might be facing something very difficult in the future.”
According to Landrum, AI undoubtedly increases the quantity and amplification of misinformation, but this may not necessarily influence public opinion.
AI-produced and AI-spread climate misinformation may also be more damaging, and get picked up more, when climate issues are at the center of the international public debate. This is not surprising: it has been a well-known pattern of climate change denial, disinformation, and obstruction in recent decades.
“There is not yet a lot of evidence that suggests people will be influenced by [AI misinformation]. This is true whether the misinformation is about climate change or not,” Landrum said. “It seems likely to me that climate dis/misinformation will be less prevalent than other types of political dis/misinformation until there is a specific event that will likely bring climate change to the forefront of people’s attention, for example, a summit or a papal encyclical.”
Galaz echoed this, underscoring that there is still only experimental evidence of AI misinformation affecting climate opinion, while reiterating that the current context and the capacities of these models are a worry.
Volume, accessibility, sophistication, and persuasion all interact with another aspect of AI: the speed at which it is developing.
“Scientists are trying to catch up with technological changes that are much more rapid than our methods are able to assess. The world is changing more rapidly than we’re able to study it,” said Galaz. “Part of that is also getting access to data to see what’s happening and how it’s happening, and that has become more difficult lately on platforms like X [since] Elon Musk.”
AI Tools for Climate: Solutions
Scientists and tech companies are working on AI-based methods for combating misinformation, but Landrum says they aren’t “there” yet.
It’s possible, for example, that AI chatbots or social bots could be used to provide accurate information. But the same principles of motivated reasoning that influence whether people accept fact checks are likely to affect whether people will engage with such chatbots: if they are motivated to reject the information, to protect their identity or existing worldviews, they will find reasons to reject it, Landrum explained.
Some researchers are trying to develop machine learning tools to recognize and debunk climate misinformation. John Cook, a senior research fellow at the Melbourne Centre for Behaviour Change at the University of Melbourne, started working on this before generative AI even existed.
“How do you generate an automatic debunking once you’ve detected misinformation? Once generative AI exploded, it really opened up the opportunity for us to complete our task of automatic debunking,” Cook told DeSmog. “So that’s what we’ve been working on for about a year and a half now: detecting misinformation and then using generative AI to actually construct the debunking [that] fits the best practices from the psychology research.”
The AI model being developed by Cook and his colleagues is called CARDS. It follows a “fact-myth-fallacy-fact” debunking structure: first, identify the key fact that replaces the myth; second, identify the fallacy that the myth commits; third, explain how the fallacy misleads and distorts the facts; and finally, “wrapping it all together,” said Cook. “This is a structure we recommend in the debunking handbook, and none of this would be possible without generative AI,” he added.
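For illustration, the four-step structure Cook describes can be sketched in a few lines of Python. This is a hypothetical toy, not the actual CARDS code; the function name and example text are invented:

```python
def build_debunk(fact: str, myth: str, fallacy: str, explanation: str) -> str:
    """Assemble a 'fact-myth-fallacy-fact' debunking: lead with the fact,
    state the myth, name the fallacy it commits, explain how it misleads,
    then close by restating the fact."""
    return "\n\n".join([
        f"FACT: {fact}",
        f"MYTH: {myth}",
        f"FALLACY ({fallacy}): {explanation}",
        f"FACT: {fact}",
    ])

print(build_debunk(
    fact="Independent temperature datasets all show long-term global warming.",
    myth="Global warming stopped in 1998.",
    fallacy="cherry picking",
    explanation="Starting a trend line at an unusually hot year hides the long-term rise.",
))
```

The point of the fixed template is that the generative model fills in the slots, while the psychology-informed structure stays constant.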
But there are challenges with developing this tool, including the fact that LLMs can sometimes, as Cook said, “hallucinate.”
He said that to resolve this issue, his team put a lot of “scaffolding” around the AI prompts, adding tools or outside input to make the model more reliable. He developed a model called FLICC, based on five techniques to combat climate denial, “so that we could detect the fallacies independently and then use that to inform the AI prompts,” Cook explained. Adding these tools counteracts the problem of the AI simply generating misinformation or hallucinating, he said. “So to obtain the facts in our debunkings, we’re also pulling from a wide list of factful, reliable websites. That’s one of the flexibilities you have with generative AI, you can [reference] reliable sources if you have to.”
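Roughly, such “scaffolding” means constraining the prompt with outside components before the generative model ever runs. The sketch below is a hypothetical illustration, not Cook’s actual system: the keyword stub stands in for a trained fallacy classifier like FLICC, and the source list is invented:

```python
# Hypothetical illustration of prompt "scaffolding": an outside fallacy
# classifier and a fixed list of vetted sources constrain what the
# generative model is asked to produce, rather than letting it free-run.

TRUSTED_SOURCES = ["nasa.gov/climate", "climate.gov", "ipcc.ch"]  # invented list

def classify_fallacy(myth: str) -> str:
    """Stub standing in for a trained fallacy classifier."""
    keywords = {
        "1998": "cherry picking",
        "natural cycles": "oversimplification",
    }
    for key, fallacy in keywords.items():
        if key in myth.lower():
            return fallacy
    return "unclassified"

def scaffolded_prompt(myth: str) -> str:
    """Build an LLM prompt that is told the fallacy in advance and is
    restricted to facts drawn from vetted sources."""
    fallacy = classify_fallacy(myth)
    sources = ", ".join(TRUSTED_SOURCES)
    return (
        f"Debunk the following myth. It commits the fallacy: {fallacy}.\n"
        f"Use only facts that can be found at: {sources}.\n"
        f"Myth: {myth}"
    )
```

Because the fallacy label and the allowed sources are decided outside the model, the generator has less room to hallucinate either one.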
The applications for this AI tool range from a chatbot or social bot to an app, a semi-automated, semi-human interactive tool, or even a webpage and newsletter.
Some of the AI tools also come with their own issues. “Ultimately what we’re going to do as we are developing the model is do some stakeholder engagement, talk to journalists, fact checkers, educators, scientists, climate NGOs, anyone who might potentially use this kind of tool, and talk to them about how they might find it useful,” Cook said.
According to Galaz, one of AI’s strengths is analyzing and understanding patterns in massive amounts of data, which can help people if developed responsibly. For example, combining AI with local knowledge about agriculture can help farmers cope with climate-driven changes, including soil depletion.
This can only work if the AI industry is held accountable, experts say. Cook believes regulation is crucial, but worries that it will be difficult to get in place.
“The technology is moving so quickly that even if you are going to try to get government regulation, governments are typically slow moving in the best of conditions,” Cook points out. “When it’s something this fast, they’re really going to struggle to keep up. [Even scientists] are struggling to keep up because the sands are shifting underneath our feet as the research, the models, and the technology are changing as we are working on it,” he added.
Regulating AI
Scholars mostly agree that AI needs to be regulated.
“AI is always spoken about in these very lighthearted, breathy terms of how it’s going to save the planet,” said Khoo. “But right now [AI companies] are avoiding the accountability, transparency, and safety standards that we wanted in social media tech policy around climate.”
Both in the CAAD report and the interview with DeSmog, Khoo warned about the need to avoid repeating the mistakes of policymakers who didnโt regulate social media platforms.
โWe need to treat these companies with the same expectations that we have for everyone else functioning in society,โ he added.
The CAAD report recommends transparency, safety, and accountability for AI. It calls for regulators to ensure AI companies report on energy use and emissions, and to safeguard against discrimination, bias, and disinformation. The report also says companies need to enforce community guidelines and monetization policies, and that governments should develop and enforce safety standards, holding companies and CEOs liable for any harm to people and the environment caused by generative AI.
According to Cook, a good way to begin addressing the issue of AI-generated climate misinformation and disinformation is to demonetize it.
“I think that demonetization is the best tool, and my observation of social media platforms . . . is that they respond when they encounter sufficient outside pressure,” he said. “If there is pressure for them to not fund or [accept] misinformation advertisers, then they can be persuaded to do it, but only if they receive sufficient pressure.” Cook thinks demonetization, together with journalists reporting on climate disinformation and shining a light on it, is one of the best ways to stop it from happening.
Galaz echoed this idea. “Self-regulation has failed us. The way we’re trying to solve it now is just not working. There needs to be [regulation] and I think [another] part is going to be the educational aspect of it, by journalists, decision makers and others.”