COP LOOK LISTEN ISSUE 09 | 20 NOV 25

Hello and welcome, friends, to the second Thursday of COP30, where uncertainty about the fate of COP30 (and the location of COP31?) still hangs in the air, while CAADies and our report are popping up all over the airwaves.

With so much still uncertain at the moment of writing, we’re taking a quick detour all the way to Brussels, where mobile billboards have been cruising through the streets calling on European Commission President Ursula von der Leyen to stand up to Big Tech, and over 200,000 citizens have already signed petitions demanding that the EU defend digital rights.

People vs Big Tech also hit the streets to ask folks what they thought about the €150 million Big Tech spends lobbying the EU. And let’s just say the answers to “Who should be controlling Europe’s future: Europeans, or Trump’s Big Tech billionaires?” were not exactly flattering to the billionaires.

Which is why we’re zooming in on one of their pet projects: Grokipedia.

FINDING OF THE DAY

Today’s findings on Wiki vs the AI come thanks to research from CAAD Intel Unit contributor Alex Stinson, who looked at why artificial intelligence remains no match for the real thing: while AI-generated content can sound trustworthy, it can just as easily be used to spread disinformation. And it is. A lot.

Elon Musk’s AI company xAI launched Grokipedia on October 27, an attempt to address his long-standing complaints about supposedly “woke” sourcing on Wikipedia, and part of what disinfo expert Renée DiResta recently described as the right-wing attack on Wikipedia.

So far, Grokipedia has been found to be not very reliable, to put it mildly. The press is finding plenty of real gaps and problems in its coverage (including the politicization of settled medical science), and it has been caught citing Nazi forums and Infowars. As long-time Wikipedia-beat reporter Stephen Harrison noted, Musk has long held a grudge about Wikipedia’s coverage of his own biography and has been attacking the site consistently since taking over X, so creating a Wikipedia competitor is hardly a surprise.

While the appeal of an “anti-woke” AI encyclopedia is likely limited, Grokipedia illustrates how AI-generated content is eroding an already-besieged digital information environment. Its launch is one example of AI-generated climate disinformation being used to exploit algorithmic news recommendations and to manipulate the public and policymakers with unreliable, unaccountable and ultimately harmful false content.

Some of these are (relatively) innocuous, if polluting, websites churning out spam for limited financial gain. For example, in searching for topics related to sustainability, we found “Sustainability-Directory,” a website optimized to cover sustainable business topics with the low-quality output from large language models now commonly known as AI slop.

It’s useful to contrast these kinds of slop factories, masquerading as general knowledge resources, with Wikipedia. The sharp contrast between websites like Wikipedia, with clear editorial policies and clear relationships to sources, and the new AI-driven content highlights just how easy it has become to publish material without editorial accountability or information integrity.

Following the release of Grokipedia, we evaluated some of its core climate articles and compared them to other AI slop websites and to Wikipedia’s pre-AI, user-generated content written from a science-based consensus.

The Wikipedia approach to information integrity on climate change

In the early 2010s, English Wikipedia saw a wave of conflicts between climate deniers and editors that ultimately led to many early contributors to climate content being banned from editing in the topic area.

However, growing awareness of the climate coverage gaps left by those bans led to a revival of WikiProject Climate Change around 2018–2019. Through this revival, the Wikipedia community has been actively reviewing climate coverage on Wikipedia, systematically bringing articles on core climate science topics into line with the UN IPCC’s AR5 and AR6 reports.

It’s easy to see the substantial change in the core climate change article between January 2019 and November 2025: citations have nearly doubled, with a focus on increasing coverage of scientific sources.

A graph showing the history of edits to the “Climate Change” page on English Wikipedia.

Notice how a large number of edits were made from 2005 to 2011, with a significant decrease through the 2010s after a group of editors were banned for editing about the “controversy” of climate change. The climate editing community returned to the topic in 2019, bringing the article up to “Featured Article” status, one of the highest quality designations on English Wikipedia.
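For readers who want to check this kind of edit history themselves, here is a minimal sketch (our own illustration, not part of Stinson’s analysis) that pulls revision timestamps for the English Wikipedia “Climate change” article from the public MediaWiki API and tallies edits per year; the endpoint and parameters are standard, but the script itself is purely illustrative.

```python
# Illustrative sketch only: tally edits per year on the English Wikipedia
# "Climate change" article using the public MediaWiki API.
from collections import Counter

import requests

API = "https://en.wikipedia.org/w/api.php"


def revisions_per_year(title):
    """Return a Counter mapping year -> number of revisions for an article."""
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvprop": "timestamp",
        "rvlimit": "max",
        "format": "json",
    }
    counts = Counter()
    while True:
        data = requests.get(API, params=params, timeout=30).json()
        page = next(iter(data["query"]["pages"].values()))
        for rev in page.get("revisions", []):
            counts[rev["timestamp"][:4]] += 1  # ISO timestamps start with the year
        if "continue" not in data:
            break
        params.update(data["continue"])  # follow the API's pagination cursor
    return counts


if __name__ == "__main__":
    for year, edits in sorted(revisions_per_year("Climate change").items()):
        print(year, edits)
```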

Moreover, that same community has been reviewing non-English Wikipedias to make sure that common misrepresentations of climate science are not translated into other languages. In 2022, the community found that only about 50 of the 143 language versions of the climate change article contained some outdated science or misinformation, most of which they were able to correct or remove. Across language Wikipedias there is a shared editorial policy that strongly prefers academic literature reviews (see the English version here), so there is a common framework for cross-language and cross-cultural collaboration. With this new awareness, communities in other languages, such as Spanish, Arabic and German, have been able to make similar updates.

Wikipedia’s clear editorial practices, and the fact that content updates are transparent and easy to examine, provide a good example of how community governance can produce clear climate communication.

Gen-AI used to build trust by feeling familiar

One of the challenges when searching for climate or environmental action concepts online is the legacy of decades of greenwashing, and the broad network of interests still trying to greenwash online knowledge in their favor.

Since the widespread adoption of AI, slop websites have been popping up across a number of domains. For example, we found that Gemini and ChatGPT were frequently referring to the Sustainability Directory in response to search queries about sustainable fashion and business topics. The website contains over 85,000 long-form text summaries of topics written by AI and appears to be built by two consultants based in China.

The Sustainability Directory seems to be trying to demonstrate its value to paying customers who want to work in sustainability business niches, establishing terms that then get picked up by AI search. For example, when searching for terms like “Fashion Ecology” with Google’s AI search, the AI model’s response is almost entirely based on citations to the Sustainability Directory.

Unfortunately, with threats to access to data from some reputable sources (such as the disappearance of key information from high-trust sites like the EPA and NASA), this kind of “insertion” of narratives could rebalance the body of evidence being surfaced by large language models and search results.

Grokipedia is deliberately selling doubt

Unlike other slop websites, where the goal appears to be to capture either algorithmic attention or a place in AI training data, Musk’s Grokipedia is a deliberate attempt to take the place of more reliable and transparently edited sources, whether Wikipedia, more traditional encyclopedias, or academic publications with clear editorial processes and information integrity safeguards.

At first glance, most of Grokipedia’s articles about scientific topics like climate change appear blandly credible, like most gen-AI content. Grok appears programmed to selectively use reliable institutional sources to make later criticism seem valid: in the Climate Change article, NASA, the IPCC, and NOAA are frequently referenced. But these citations are a Trojan horse, establishing a baseline of credibility in a neutral voice and then using that initial trust to resurrect “zombie” arguments of climate denial.

The section titled “Controversies and Alternative Views” is full of very explicit misdirection, for example, attacks on the 97% consensus study, a tactic deniers commonly use to distract from the overwhelming consensus among climate experts on the reality of anthropogenic warming. Framing this as a major, unresolved controversy is a gross misrepresentation designed to create the illusion of a divided scientific community, a longstanding goal of denial campaigns.

Moreover, unlike Wikipedia, where the editing community’s editorial strategy prefers literature reviews by scientists over less editorially controlled sources (there are nearly 200 references to the IPCC in the Wikipedia article), the Grokipedia article subtly works deliberately contrarian sources in throughout. For example, the Substack of Roger Pielke, a well-identified climate contrarian now employed by a think tank funded by Big Oil and Big Tobacco, is used as a critical source to downplay the severity of climate disasters as identified in peer-reviewed literature.

This dynamic appears in other entries too, for example, on the Sustainable Development Goals. The first two paragraphs read like a reasonable overview of the challenges faced by large-scale global collaboration, citing only UN sources. But the third paragraph introduces the “Controversy” about the implementation of the SDGs, where fully half of the citations come from a listicle in The Africa Report and from the Heritage Foundation. Heritage is a Big Oil/Business-backed conservative lobby group behind Project 2025, recently in the news for its president’s support of Tucker Carlson’s interview with a Holocaust denier, part of a larger culture of open racism and misogyny at the organization, according to a whistleblower.

Placing editorial weight on a handful of easy-to-read critical voices sets up a rhetoric of doubt early in the article that keeps returning throughout. By comparison, the English Wikipedia article focuses on academic reviews of criticisms and systematically examines criticism from a wide range of sources.

When it comes to climate solutions, there is real inconsistency in the quality and depth of coverage. For example, the Battery storage power station article says at the bottom that it is adapted from Wikipedia, but it has a fraction of the content (23 sources vs the 125 used on English Wikipedia). Almost all of its sources are financial reporting websites, and two of them highlight Tesla’s leadership in the sector. Worse, company and technology pages related to Musk-owned businesses give undue weight to business-friendly information: the Tesla article overwhelmingly references press releases from the Tesla website (and has twice as many citations as the Climate Change article).
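For the Wikipedia side of comparisons like this, reference counts can be approximated in a few lines; the sketch below (illustrative only, and relying on the standard MediaWiki action API) fetches an article’s wikitext and counts its <ref> citation tags. Grokipedia offers no comparable public API that we know of, so its counts would still have to be tallied by hand.

```python
# Illustrative sketch only: roughly count <ref> citation tags in a
# Wikipedia article's wikitext via the public MediaWiki API. Reused named
# references are counted once per tag, so treat this as an approximation.
import re

import requests

API = "https://en.wikipedia.org/w/api.php"


def count_ref_tags(title):
    """Fetch the latest wikitext of an article and count its <ref ...> tags."""
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvprop": "content",
        "rvslots": "main",
        "rvlimit": 1,
        "format": "json",
    }
    data = requests.get(API, params=params, timeout=30).json()
    page = next(iter(data["query"]["pages"].values()))
    wikitext = page["revisions"][0]["slots"]["main"]["*"]
    # Match opening <ref> and <ref name=...> tags, but not <references>.
    return len(re.findall(r"<ref[\s>]", wikitext))


if __name__ == "__main__":
    print(count_ref_tags("Battery storage power station"))
```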

The lack of transparency about how business and corporate information gets written on Grokipedia highlights just how hard it is to trust AI-generated content that lacks any editorial standards.

Climate information integrity for “informational” websites?

As the UN system advances the Global Initiative for Information Integrity on Climate Change, it should consider the overall responsibility of “informative” website providers to actually provide accurate, evidence-based content. Generative AI can clearly create more content than would be sustainable through human writing processes, and that content is easily amplified by algorithms, as the Sustainability Directory shows.

Human-in-the-loop projects, though slower at producing content, create greater clarity about how to improve that content and allow norms and practices to emerge that safeguard information integrity. Humans, once they see a problem (like the climate disinformation found on non-English Wikipedias), can systematically work through the content to make it more accurate and find reasonable, moderate views that show the breadth of human opinion on a topic.

Moreover, as AI-created content floods ever deeper into the internet, there is a need for deliberate social and policy measures that set expectations for accountability: how do you make sure the most reliable sources are the ones used by the models? How do you keep companies and wealthy interests from inserting ungrounded or manipulated claims into model logic?

But all that presumes a positive answer to a much more fundamental question: if AI is just reproducing existing information, but worse and with less accuracy, why is it worth the immense energy costs and pollution? Or is it just the next Myspace, quantum, NFT, or metaverse tech fad, and the best thing you can do with AI for the climate is turn it off?

GOOD TO KNOW

LISTEN TO THE EXPERTS

If you have any investigative leads CAAD should explore, or want to find out more about our research and intel during the summit, please email [email protected]. We also have members on the ground in Belém who are available for interviews and side-events.