The Climate Action Against Disinformation (CAAD) coalition has written to US Senator Chuck Schumer, calling for policies that address both the environmental footprint of AI and its role in spreading climate disinformation.
One of the letter's primary demands is the introduction of mandatory disclosure requirements for companies developing energy-intensive AI models, so that the environmental impact of those models becomes transparent. The coalition also urges lawmakers to build measures countering AI-fueled climate disinformation into legislative efforts.
The concerns centre on large language models (LLMs), such as ChatGPT, which consume significant amounts of energy during training and thereby contribute to carbon emissions. The letter further stresses that companies should provide comprehensive reports on the environmental impact of their AI models across the entire life cycle, including the energy consumed by user queries and the materials used in AI hardware.
Beyond environmental concerns, the letter addresses the alarming potential of AI to amplify disinformation about climate change. Studies have indicated that people tend to trust AI-generated content more than human-generated content, particularly on topics such as climate change. In light of this, the letter suggests holding companies and their executives accountable for the environmental and societal harms that may result from the use of generative AI, while emphasising the need to protect free expression and human rights.