Simon Willison’s Weblog: We did the math on AI’s energy footprint. Here’s the story you haven’t heard.

Source URL: https://simonwillison.net/2025/May/20/ai-energy-footprint/#atom-everything
Source: Simon Willison’s Weblog
Title: We did the math on AI’s energy footprint. Here’s the story you haven’t heard.

Feedly Summary: James O’Donnell and Casey Crownhart try to pull together a detailed account of AI energy usage for MIT Technology Review.
They quickly run into the same roadblock faced by everyone else who’s tried to investigate this: the AI companies themselves remain infuriatingly opaque about their energy usage, making it impossible to produce credible, definitive numbers on any of this.
Something I find frustrating about conversations about AI energy usage is the way anything that could remotely be categorized as “AI” (a vague term at the best of times) inevitably gets bundled together. Here’s a good example from early in this piece:

In 2017, AI began to change everything. Data centers started getting built with energy-intensive hardware designed for AI, which led them to double their electricity consumption by 2023.

ChatGPT kicked off the generative AI boom in November 2022, so that six-year period mostly represents growth in data centers in the pre-generative AI era.
Thanks to the lack of transparency on energy usage by the popular closed models – OpenAI, Anthropic and Gemini all refused to share useful numbers with the reporters – they turned to the Llama models to get estimates of energy usage instead. They estimated prompts like this:

Llama 3.1 8B – 114 joules per response – run a microwave for one-tenth of a second.
Llama 3.1 405B – 6,706 joules per response – run the microwave for eight seconds.
A 1024 x 1024 pixel image with Stable Diffusion 3 Medium – 2,282 joules per image, which I’d estimate at about two and a half seconds of microwave time.
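The microwave comparisons above are easy to check. As a minimal sketch, assuming a roughly 800 W microwave (the article doesn’t state the wattage; ~800 W is the value that roughly reproduces its comparisons, since one watt is one joule per second):

```python
# Convert the article's per-response energy figures into seconds of
# microwave runtime. MICROWAVE_WATTS is an assumption, not from the article.
MICROWAVE_WATTS = 800  # assumed wattage; 1 W = 1 J/s

estimates_joules = {
    "Llama 3.1 8B response": 114,
    "Llama 3.1 405B response": 6_706,
    "Stable Diffusion 3 Medium image": 2_282,
}

for task, joules in estimates_joules.items():
    seconds = joules / MICROWAVE_WATTS
    print(f"{task}: {joules} J ≈ {seconds:.1f} s of microwave time")
```

With that assumed wattage, 114 J works out to about 0.14 s, 6,706 J to about 8.4 s, and 2,282 J to about 2.9 s – close to the article’s one-tenth of a second, eight seconds, and two-and-a-half seconds respectively.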

Video models use a lot more energy. Experiments with CogVideoX (presumably this one) used "700 times the energy required to generate a high-quality image" for a 5-second video.
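To put that multiplier in familiar units: if the "high-quality image" baseline is the 2,282 J Stable Diffusion figure (an assumption – the article doesn’t say which image estimate the 700x refers to), the arithmetic looks like this:

```python
# Rough scale of the 700x video figure, assuming the 2,282 J
# Stable Diffusion 3 Medium image as the baseline (an assumption).
image_joules = 2_282
video_joules = 700 * image_joules      # energy for one 5-second clip
video_kwh = video_joules / 3_600_000   # 1 kWh = 3.6 million joules

print(f"{video_joules:,} J ≈ {video_kwh:.2f} kWh per 5-second clip")
```

That comes to roughly 1.6 million joules, or about 0.44 kWh – three orders of magnitude more than a single text prompt.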

AI companies have defended these numbers saying that generative video has a smaller footprint than the film shoots and travel that go into typical video production. That claim is hard to test and doesn’t account for the surge in video generation that might follow if AI videos become cheap to produce.

I share their skepticism. I don’t think comparing a 5-second AI-generated video to a full film production is a credible comparison.
This piece generally reinforced my mental model that the cost of (most) individual prompts by individuals is fractionally small, but that the overall costs still add up to something substantial.
The lack of detailed information around this stuff is so disappointing – especially from companies like Google who have aggressive sustainability targets.
Tags: ai-energy-usage, llms, ai, generative-ai, ai-ethics

AI Summary and Description: Yes

Summary: The article delves into the often-overlooked energy consumption associated with AI technologies, particularly focusing on the opacity of AI companies regarding their energy usage metrics. It highlights significant energy demands of generative AI models, like those from Llama, and contextualizes the implications of this usage amidst a growing AI industry, raising important questions for professionals involved in AI security, infrastructure, and sustainability.

Detailed Description: The discussion presented in the text provides critical insights into the energy dynamics of AI development and deployment. Key points include:

– **Opaque Reporting**: AI companies are notably secretive about their energy consumption data, which impairs any meaningful analysis or comparison regarding their environmental impact.
– **Growth in Data Centers**: Since 2017, the construction of data centers optimized for AI applications has led to a doubling of their electricity consumption by 2023. This is a direct result of increasing data processing demands associated with AI functionalities.
– **Energy Estimates from Llama Models**: Where direct metrics from major AI companies were unavailable, estimates from alternative models (e.g., Llama) were used:
  – Llama 3.1 8B requires 114 joules per prompt (equivalent to running a microwave for 0.1 seconds).
  – Llama 3.1 405B uses 6,706 joules per prompt (about 8 seconds of microwave use).
  – Generating a high-quality image with Stable Diffusion 3 Medium requires approximately 2,282 joules.
– **Comparative Energy Usage**: Video generation processes consume significantly more energy; for instance, producing a 5-second video could require energy equivalent to 700 high-quality images.
– **Defensiveness of AI Companies**: Companies argue that generative video models may have a smaller environmental footprint than traditional film production, though this assertion is difficult to validate and raises skepticism regarding true sustainability.
– **Cost Implications**: The article emphasizes that while the cost per individual AI prompt may appear negligible, the cumulative energy demands are substantial, raising concerns about the overall environmental impact of AI technologies.
– **Sustainability Goals**: The lack of transparency and the complexity of assessing these companies’ energy usage pose challenges for evaluating their sustainability commitments, particularly for firms like Google that have publicly stated aggressive sustainability targets.

In light of these findings, professionals in cloud computing, AI, and infrastructure security should consider the ramifications of energy consumption in AI development not only for ethical reasoning but also for compliance with emerging regulations concerning environmental impacts.