The AI Threats to Climate Change

This new report from the Climate Action Against Disinformation (CAAD) explores the risks that AI poses to the climate crisis.

“AI companies spread hype that they might save the planet, but currently they are doing just the opposite,” said Michael Khoo, Climate Disinformation Program Director at Friends of the Earth. “AI companies risk turbocharging climate disinformation, and their energy use is causing a dangerous increase in overall US consumption, with a corresponding increase in carbon emissions.”

“We are already seeing how generative AI is being weaponized to spin up climate disinformation or copy legitimate news sites to siphon off advertising revenue,” said Sarah Kay Wiley, Director of Policy at Check My Ads. “Adtech companies are woefully unprepared to deal with generative AI, and the opaque nature of the digital advertising industry means advertisers are not in control of where their ad dollars are going. Regulation is needed to build transparency and accountability so advertisers can decide whether to support AI-generated content.”

“The evidence is clear: the production of AI is having a negative impact on the climate. The responsibility to address those impacts lies with the companies producing and releasing AI at a breakneck speed,” said Nicole Sugerman, Campaign Manager at Kairos Fellowship. “We must not allow another ‘move fast and break things’ era in tech; we’ve already seen how the rapid, unregulated growth of social media platforms led to previously unimaginable levels of online and offline harm and violence. We can get it right this time, with regulation of AI companies that can protect our futures and the future of the planet.”

“The climate emergency cannot be confronted while online public and political discourse is polluted by fear, hate, confusion and conspiracy,” said Oliver Hayes, Head of Policy & Campaigns at Global Action Plan. “AI is supercharging these problems, making misinformation cheaper and easier to produce and share than ever before. In a year when 2 billion people are heading to the polls, this represents an existential threat to climate action. We should stop looking at AI through a ‘benefit-only’ analysis and recognise that, in order to secure robust democracies and equitable climate policy, we must rein in big tech and regulate AI.”

“The skyrocketing use of electricity and water, combined with its ability to rapidly spread disinformation, makes AI one of the greatest emerging climate threat-multipliers,” said Charlie Cray, Senior Strategist at Greenpeace USA. “Governments and companies must stop pretending that increasing equipment efficiencies and directing AI tools towards weather disaster responses are enough to mitigate AI’s contribution to the climate emergency.”

Key Findings:
  • AI systems demand vast and growing amounts of energy and water. At an industry-wide level, the International Energy Agency estimates that energy use from the data centers that power AI will double in just the next two years, consuming as much energy as Japan. These data centers and AI systems also use large amounts of water in their operations and are often located in areas that already face water shortages.
  • AI will help spread climate disinformation. The World Economic Forum in 2024 identified AI-generated mis- and disinformation as the world’s greatest short-term threat (followed by climate change), saying “large-scale AI models have already enabled an explosion in falsified information.” AI will allow climate deniers to more easily, cheaply and rapidly develop persuasive false content and spread it across social media, targeted advertising and search engines.

Based on these findings, CAAD has three priority recommendations for tech companies and regulators to adopt and implement:

  • Transparency: Companies must report on energy use and emissions produced, assess environmental and social justice implications of developing their technologies and explain how their AI models produce information.
  • Safety: Companies must demonstrate that their products are safe for people and the environment and explain how their algorithms are safeguarded against discrimination, bias and disinformation. Governments must develop common standards on AI safety reporting and work with the Intergovernmental Panel on Climate Change to develop coordinated global oversight.
  • Accountability: Governments should enforce rules on investigating and mitigating the climate impacts of AI with clear, strong penalties for noncompliance. Companies and their executives must be held accountable for any harms that occur as a result of their products.

You can read the report in full here.