For 20 million users? Great, you just proved that AI energy impact is not big.
Like a fraction of 20 million users over-using AC, as happens, for instance, in the US. Or flying for work continuously without a good reason - also super common, and incredibly more wasteful (~30h of 737 flight time is roughly the energy needed for the training, not inference, of a model with hundreds of billions of parameters).
The point about AI and energy does not make sense; it exists only because there are people who are worried about AI and need to find something to show it creates damage.
You can install Flux on your fucking laptop and generate images there, and you'll see that making it as hot (or discharging the battery as quickly) as playing any modern videogame takes a lot of effort and a lot of generated images. And you are running on a battery...
Want to talk about all these stupid JavaScript frameworks that make loading a trivial page so wasteful, energy-wise? Almost everything is worse than AI, basically. The fact that this post is upvoted here on HN, where once there was more critical thinking, is a testament to how badly we are doing nowadays.
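A back-of-envelope check of that 737 comparison, as a sketch (assumed figures: ~2.5 t of jet fuel per flight hour, ~11.9 kWh/kg fuel energy, and the commonly cited ~1,300 MWh estimate for a GPT-3-scale training run):

    # Rough sanity check of the "30h of 737 flight ~ one big training run" claim.
    fuel_kg_per_hour = 2500      # assumed 737 fuel burn
    kwh_per_kg_fuel = 11.9       # approximate energy density of jet fuel
    flight_mwh = fuel_kg_per_hour * kwh_per_kg_fuel * 30 / 1000
    print(flight_mwh)            # ~890 MWh of fuel energy over 30 hours
    print(flight_mwh / 1300)     # ~0.7x a GPT-3-scale run: same order of magnitude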
To add some extra numbers here just to showcase how little energy usage this is.
This means it's adding about 0.012% to those users' energy consumption.
From another angle: average US house energy consumption is around 30 kWh per day. 0.012% of that is 3.75 watt-hours of energy per day. This is the same amount of energy as streaming HD video to your iPhone on a 4G network for 1.5 seconds. [0]
So in other words, a 15s YouTube ad you are forced to watch on your phone before the video you were going to watch anyway takes an order of magnitude more energy than the average AI user's daily usage, according to this article.
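Working those numbers backwards, as a sketch (the streaming intensity is whatever [0] assumes, which comes out to ~2.5 Wh per second of HD-over-4G):

    # The arithmetic behind the comparison above.
    house_wh_per_day = 30_000                       # ~30 kWh/day average US household
    ai_wh_per_day = house_wh_per_day * 0.000125     # "about 0.012%" (0.0125% gives 3.75)
    print(ai_wh_per_day)                            # 3.75 Wh/day
    streaming_wh_per_s = ai_wh_per_day / 1.5        # implied by the "1.5 seconds" in [0]
    print(streaming_wh_per_s)                       # ~2.5 Wh/s
    print(15 * streaming_wh_per_s / ai_wh_per_day)  # a 15 s ad: ~10x the daily AI figure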
Help me understand. The article claims 20MM users with an average of 24 prompts per day. Do we have any information on the actual average number of prompts per user? I suspect it would follow a Pareto distribution, with a small subset consuming much, much more, but I have no clue as to whether that average assumption holds.
Edit: a quick search points to 5-10 prompts per session but we’d still need to know the average number of sessions
In December I did 61 images, and in November I did 121, according to Replicate's billing. Probably more like 10-20 images per session, which for me are usually 1-2 times per week?
Is it not still better to just avoid using all this energy though? Do we really need to have these LLM tools everywhere?
And more importantly, when do we stop justifying the use of one finite resource by pointing to all the other ways we could reduce use but don't? Just because people waste a bunch of energy, water, wood, etc doesn't justify using it however you want, at least in my opinion.
We don't "need" it any more than we need endless streams of cat videos and fake personalities on tiktok. Some people get more utility out of one over the other.
Some would argue that we would all be better off without those other things and keeping AI instead.
That subjective interpretation of marginal utility - where valuable becomes wasteful for any particular action - is both the reason for and the answer to your questions.
To me this line of questioning is perplexing, because the history of civilization and technological development is essentially a story of applying more energy in place of human work. We burned wood to cook food and unlock more calories from it. We replaced wagons and horses with cars, trucks and gasoline. We harnessed the sun and the atom and sent probes throughout our solar system to do research.
We have always required more energy to advance civilization. Sure, we should improve efficiency and sustainability; if we don't, we may cause a calamity which could take out many of us as well as a lot of other critters (though on a geological scale, it'll basically be meaningless). But if we start thinking that using more energy is inherently a bad thing that needs to be opposed, civilization will regress.
The problem that underlies these questions, as I see it, is whether we will ever learn what "enough" really means for ourselves.
At some point people have to decide for themselves when they have enough and when they can just enjoy life as much as possible rather than chasing the next thing.
I'm not proposing that it's a clear line, or that the line is the same for everyone, or that the line should be decided by someone else and forced on you.
My point is simply that we're screwed as a species if we continue to just want more and more of everything, especially convenience, regardless of the cost.
May come down to philosophical differences - but I'd prefer humanity to continue progressing, curing diseases, learning about the Universe, improving quality of life, etc. as far as we can, as opposed to deciding at some point (or gradually) that we've had enough. Threshold for new advancements should not be whether we absolutely need it, but whether it does more good than harm - which I think is easily the case for LLMs and other machine learning models.
Honest question here, hopefully it comes across that way but you never know with text.
I've never quite understood "progress" as a goal. The word is directional, it needs a goalpost to move towards, and I often feel like it's used more to mean "just keep changing stuff."
How do you define that goal post, and if we got there would you be happy to stop there?
For me that's the challenge. I think I said it elsewhere in this thread, but people rarely know what "enough" means for them. To me that seems like an endless loop of changing stuff and consuming all along the way, because we've zoomed in on the approach of always wanting things to be better rather than setting an expectation of what is good enough.
> The word is directional, it needs a goalpost to move towards [...] How do you define that goal post, and if we got there would you be happy to stop there?
I'd claim you can in theory have a direction without an end-point - where even if some measure is at 9^99 there's always still 9^99 + 1.
At the very least, you can definitely have a direction without setting an artificial end-point. May be that in reducing malaria cases/infant mortality rates we hit 0 and thus literally cannot go lower, and so hit a goalpost in that sense, but if we're at 500 malaria cases and can reduce it further (without equal/greater degradations in other areas) then I don't see why we should ever be "happy to stop there".
> To me that seems like an endless loop of changing stuff
Might agree about, say, website redesigns or fast fashion, but I consider humanity's position as a whole (such as standard of living) to be on a significant upwards trend.
> Might agree about, say, website redesigns or fast fashion, but I consider humanity's position as a whole (such as standard of living) to be on a significant upwards trend.
Sure, and I'm not trying to say we haven't made significant impacts with technological advancements. We do still have to define how we'd even measure humanity's position though, without that we don't have direction and can't know if we're getting closer or further away.
Trends require a similar context. On a 30 year scale I'd say we may have moved upwards (though significantly is a stretch). On 100 years we have almost certainly made significant movement upwards on some metrics, but others would have a significant trend downward. On a 1,000 year scale I think it'd be harder to find areas where we haven't trended upward significantly (I'm sure there are some, but I'd be surprised if there are many). That still leaves the question though, do we always need to trend upward on a never ending treadmill of change and increased consumption?
> We do still have to define how we'd even measure humanity's position though, without that we don't have direction and can't know if we're getting closer or further away.
What people fundamentally value will vary by their philosophy (increasing pleasure and decreasing suffering, searching for higher meaning in the Universe, etc.) but I'd claim the intermediary objectives tend to have a lot of overlap, with a shared set coalescing around (roughly) increasing humanity's knowledge and potential/capacity to act. For instance, gaining capability to deflect meteors puts us in a better position because getting wiped out by one would render accomplishing effectively anything else impossible - making it an intermediary objective common to pretty much all frameworks.
> On a 30 year scale I'd say we may have moved upwards (though significantly is a stretch)
As compared to 1995, the average life expectancy has increased from 64.9 to 73.5 and the portion of the population living in extreme poverty has decreased from 32.5% to 8.5% - seems significant to me, even ignoring scientific/technological advancements beyond that. True that averaging over a longer time period smooths out the bumps.
> do we always need to trend upward on a never ending treadmill
To me, "treadmill" implies lack of progress. If we have the option to improve things (e.g: reduce cancer cases) and an option to not, I don't really understand the idea that we should default to the latter, only choosing the former if we "need" to. Only way I can really make sense of that policy is if at some level you actually think it's instead a downwards/net negative trend.
> For instance, gaining capability to deflect meteors puts us in a better position
That would still have to be weighed with the cost of such an ability. What would it take to actually do that? What would it take to get there? And what destructive or dangerous uses are now unlocked along that path? In a vacuum I'd absolutely want to have that ability; in reality it's just a lot harder to say whether that goal is worth chasing.
> As compared to 1995, the average life expectancy has increased from 64.9 to 73.5 and the portion of the population living in extreme poverty has decreased from 32.5% to 8.5%
What's the scope for those stats, are they global or specific to one country?
I'm always torn on life expectancy as a metric. Modern medicine has done a great job of extending lives, but I've seen plenty of examples of the extra years being pretty miserable. I'd be very interested to see a life expectancy comparison taking into account quality of life.
> To me, "treadmill" implies lack of progress.
Totally fair, that may not be the best analogy. My point was just that it's a goal with no end state.
> If we have the option to improve things (e.g: reduce cancer cases) and an option to not, I don't really understand the idea that we should default to the latter
The point with defaulting to the latter is "don't break a good thing." Considering only the potential upsides misses the potential downsides. We can't cure cancer in a vacuum, there are always costs, and frankly the idea of curing a disease seems more utopian than realistic. Disease is nothing more than a named collection of symptoms; you may be able to drastically reduce any one root cause, but that doesn't eradicate the symptoms, and most modern treatments are designed only to treat symptoms and ignore the root cause.
As always you'd have to determine whether some course of action would have disproportionate negative impact on other objectives, yep. Point there was to set out on what basis I'd decide and weigh those objectives - an orientation for what "progress" is.
> And what destructive or dangerous uses are now unlocked along that path?
I believe unanticipated novel applications skew positive on average, largely on account of most humans having mostly-positive intentions.
> What's the scope for those stats, are they global or specific to one country?
> but I've seen plenty of examples of the extra years being pretty miserable. I'd be very interested to see a life expectancy comparison taking into account quality of life.
You could consider a factor such as Quality-Adjusted Life Expectancy (QALE), which is also on the rise.
> The point with defaulting to the latter is "don't break a good thing."
If "break" means "make overall better", why not?
> Considering only the potential upsides misses the potential downsides.
To be clear, my stance is that you should consider whether it's a net positive/negative:
* "Threshold for new advancements should not be whether we absolutely need it, but whether it does more good than harm"
* "if we're at 500 malaria cases and can reduce it further (without equal/greater degradations in other areas) then I don't see why we should ever be "happy to stop there""
My understanding is that you believe there should be some point at which we just stop even if there are still more beneficial changes to be made (in addition to not making net-negative changes, which we both agree on - though perhaps disagree about what specifically works out to be net-negative).
> We can't cure cancer in a vacuum, there are always costs,
There are costs, but it's very easily possible for the benefits to outweigh them, and more becomes feasible as humanity's knowledge/capabilities increase.
> and frankly the idea of curing a disease seems more utopian than realistic.
There are already a couple of diseases (including smallpox) that we've completely eradicated, and more (like polio) that we've made very good progress on. Cancer is tougher by its nature, but we can certainly at least reduce the number of cases and improve life for those who have it.
> If "break" means "make overall better", why not?
Well that depends heavily on how well we can predict the outcomes and account for downsides and externalized costs. That doesn't mean never change anything, I'd never argue for that, but I am generally more hesitant than most to change things that are working well enough.
> My understanding is that you believe there should be some point at which we just stop even if there are still more beneficial changes to be made
I think your description here assumes too much clarity in what is possible or what will happen if you continue on. Sometimes the system is well understood and the potential gains are pretty clear, that isn't always the case though.
> There are already a couple of diseases (including smallpox) that we've completely eradicated, and more (like polio) that we've made very good progress on
I wish virology was that cut and dried. There are diseases that present nearly identically to smallpox and polio but were given different names based on hypothesized causes. Monkeypox and acute flaccid myelitis come to mind, respectively. If I remember right, polio is actually one where the clinical diagnosis steps rule it out simply because polio has been deemed eradicated, meaning anyone with polio-like symptoms must have something else because we got rid of polio.
There's a whole rabbit hole around how virology tests and isolates viruses that's probably not helpful to go down here. My point stands though: maybe we can eradicate a particular virus, but we can't really eradicate a disease when that's largely just a named collection of symptoms.
> I think your description here assumes too much clarity in what is possible or what will happen if you continue on. Sometimes the system is well understood and the potential gains are pretty clear, that isn't always the case though.
Does halting not also come with lack of clarity? For instance, say we're choosing between two options:
1. Continue with disease/vaccine/antibiotic/etc. research
2. Halt said research
To me it seems as though both options come with uncertainties, which could resolve positively as well as negatively. That there is uncertainty or a lack of clarity does not bias me towards option 2. My choice would still be based on analysis of which option has the better expected outcome (which is, to my current knowledge, the former) - even if the best analysis available is a guesstimate with very low correlation (so long as it's a positive correlation).
> There are diseases that present nearly identically to smallpox and polio but were given different names based on hypothesized causes. Monkeypox and acute flaccid myelitis come to mind
Mpox is considerably milder - having killed hundreds of people compared to smallpox's hundreds of millions.
> If I remember right, polio is actually one where the clinical diagnosis steps rule it out simply because polio has been deemed eradicated
Polio isn't eradicated like smallpox is, though for most people it may as well be. Given ongoing eradication efforts my understanding is that there is fairly extensive testing compared to other diseases of its rarity (e.g: https://polioeradication.org/what-we-do/), so it's not just a case of "it's gone because we're ignoring it" if that's what's being implied.
> Does halting not also come with lack of clarity?
That depends on how you choose to view it. Choosing to halt, or to be happy with where you are, is a certainty in that you have what you have but it is uncertain in that you don't know what you could have had if you kept going.
> Mpox is considerably milder - having killed hundreds of people compared to smallpox's hundreds of millions.
The severity is milder for sure, but the list of symptoms is nearly identical. Severity is a tricky one for most pathogens, it's very common for severity to decrease over time as a novel pathogen becomes endemic.
> so it's not just a case of "it's gone because we're ignoring it" if that's what's being implied
My implication was "we won't diagnose it because we know it was eradicated," but that may come down to semantics. I'm happy to be wrong there too, I can't find the source I remember seeing for clinical diagnosis so maybe I'm thinking of a different condition.
If you halt disease research, there's a greater chance of a pandemic - or losing the battle to drug-resistant diseases. If you halt meteor deflection programs, there's a greater chance of a significant portion of life being wiped out by a meteor. That doesn't seem like certainty in any sense to me.
> The severity is milder for sure, but the list of symptoms is nearly identical.
You could say that we haven't eliminated all diseases with similar clinical presentations, but we have eliminated smallpox - the disease that was killing hundreds of millions.
> Severity is a tricky one for most pathogens, it's very common for severity to decrease over time as a novel pathogen becomes endemic.
Mpox is not just a strain of smallpox that became less virulent over time - they diverged long before smallpox jumped to humans. Smallpox stayed deadly for thousands of years.
> If you halt disease research, there's a greater chance of a pandemic
To clarify, there is a greater chance of a natural pandemic. The chance of a man-made pandemic goes down or disappears entirely, though. I don't know how to properly weigh the pros and cons there, especially without knowing the likely top-secret details of what biological weapons research is ongoing.
> but we have eliminated smallpox
We have a definitional disagreement here. My understanding is that diseases are classified and diagnosed by symptom, not by severity of the symptoms. That would be especially tricky as everyone will have a different reaction to the pathogen or condition and the same underlying cause could present with more or less severity.
If you consider smallpox eliminated because disease with a similar severity is no longer present then we likely have to consider Covid eliminated as well, I don't think many would agree that it is though. People still test positive for CoV-2 and are diagnosed with Covid based on symptoms, it just isn't nearly as deadly or severe for most of the population now.
> Mpox is not just a strain of smallpox that became less virulent over time
Maybe I'm being pedantic here; apologies if so. My understanding, though, is that monkeypox and smallpox are/were diagnosed by symptom and rarely or never diagnosed by isolating and identifying the virus in patients. Monkeypox and smallpox have very similar symptoms, meaning they could very well be considered the same disease even if they are believed to be caused by two different viruses.
> To clarify, there is a greater chance of a natural pandemic.
I think it'd be excessively pessimistic (and very selectively so) to believe vaccine/etc. research would increase the overall chance of a pandemic. But the idea I'm getting at is that you seem to be mistakenly treating the choice of halting as not bringing uncertainty other than the relative "you don't know what you could have had" kind. Personally I'd say it's usually the inherently riskier choice than just continuing, on top of having a worse expected result, but hopefully you can see that the action of halting does come with at least some uncertainty of its own.
> My understanding is that diseases are classified and diagnosed by symptom, not by severity of the symptoms. That would be especially tricky as everyone will have a different reaction
Symptoms also vary by person. Diagnosis can take into account a variety of factors (medical history, blood tests, etc.), including severity, to determine the most likely identification.
Diseases are classified in documents like ICD-11, with mpox defined as caused by the monkeypox virus ("A disease caused by an infection with monkeypox virus [...] Confirmation is by identification of monkeypox virus") and smallpox as caused by the variola virus ("A disease caused by an infection with variola virus [...] Confirmation is by identification of the variola virus in a skin sample of the rash").
> If you consider smallpox eliminated because disease with a similar severity is no longer present then we likely have to consider Covid eliminated as well
If there were extant strains of smallpox, even if less virulent, I would not consider it eradicated (but it could nonetheless be a significant success of medical advancements).
I'm not aware that we've eradicated any strains of SARS-CoV-2, let alone all of them, so I see no reason to consider COVID-19 eradicated.
Correct me if I'm wrong, but you seem to be suggesting that even if we did eradicate SARS-CoV-2, we shouldn't consider COVID-19 eradicated because there would still inevitably be some patients presenting with sufficiently-similar-but-less-severe symptoms from, say, influenza.
> Monkeypox and smallpox have very similar symptoms, meaning they could very well be considered the same disease even if they are believed to be caused by two different viruses.
I'm not aware of anywhere that equates the two diseases - they're different enough even just in symptoms (including severity) and response that it would seem counterproductive to do so.
If "could very well be considered the same disease" means more in terms of "well it's all just arbitrary labels - in theory we could've chosen to classify diseases more broadly" then sure, but it doesn't really take away from the accomplishment of eradicating [the thing currently know as the smallpox disease] even if it were to no longer be known as a disease on its own.
We (industrial societies) didn’t know for a long, long time that eventually, more energy usage (especially the type of energy usage we practice) would impact the planet the way that it has. Now we have the choice to put more effort into finding better ways to use the energy we have and develop newer energy tech, but populist politics and entrenched energy interests actively work against that.
AI doesn't simply use up our energy. It can also help reduce energy usage in many fields or improve energy generation. I would not be worried about energy when it comes to AI.
> it can also help reduce energy usage in many fields or improve energy generation
Do we have any examples of this happening yet?
My understanding is that these are aspirational goals of what AI could be if it is one day able to model the world around us and reason through novel approaches to solving problems.
I don't think anyone has argued for defining AI by transistor count, at least I've never heard that take.
It's based on capabilities, and the question is always where one draws the line.
It seems odd to me for the line to be so low that a simple ML algorithm running on embedded hardware would count as AI. What does the term AI mean at that point, and what term would you propose we start using for systems that are approaching or exceeding human intelligence?
To me, AI is something that learns or adapts how to perform a task, without being explicitly given instructions on it. I've almost always seen this definition used in this manner.
Something beyond human intelligence is also AI, just maybe a subset of general AI or strong AI (as opposed to narrow or weak AI).
I don't think I was fear mongering here, not sure if that was referencing someone else.
I also wasn't just talking about LLM energy use here. I agree most people's footprint from LLM use is extremely low relative to what else they do. There is likely lower-hanging fruit, though you must also agree that doesn't mean we might as well use a little bit more just because it's a small amount.
Maybe cutting LLM use out of your equation is small, but do you really need it? I'd assume not; I'd also assume there are larger items you could cut if you really looked, and it wouldn't hurt your day-to-day life at all.
That said, I don't know you at all, so I could very well be wrong in the specific case. I would stand behind the argument broadly though: most people could cut a lot of consumption from their lives and may even find themselves happier for it.
Yeah, the real point is that "AI uses a lot of energy: here's why you should be concerned" is just a post-hoc justification for someone who doesn't like AI and needs something to point to.
Agreed, that's not a useful argument when it's posed that way. That is only one way of seeing it though.
The question, in my opinion, isn't whether LLMs are a huge energy suck and we should be worried. My question is whether it's really needed at all, regardless of how much energy it may or may not use relative to other things.
People's needs really are pretty simple at the end of the day. Food, water, shelter, (the feeling of) safety, community. We just aren't good at defining enough and some people are very good at convincing us that we don't need to even consider that concept.
>Is it not still better to just avoid using all this energy though?
No, especially if there are ulterior motives of mUh eViL IA!111!1!one! as alluded to by parent commenter.
If people have problems with "AI" they should bring forth their concerns frankly on their merits, not coat it with tHiNk Of dA kIdZ! and appeal to emotions.
>Just because people waste a bunch of energy, water, wood, etc doesn't justify using it however you want,
Whoever "wastes" them is paying for them, generally speaking. If they really are concerned about the waste they'll stop of their own accord.
I am apt to agree, but I reject this whole argument completely. This is just part of a de-growth agenda where every action is looked at through the lens of scarcity. Sometimes it's environment, other times it's equity, but it always leads to extinguishing human flourishing.
Examples:
- Having babies is bad because it increases the carbon footprint
- Having babies is bad because world is over-populated
- Giving advantages to your children is bad because others aren't as advantaged
- Why invest in rockets when people are starving in [country]
- Honors classes are bad because the distribution of the participants is not proportional to the population
It's all just so exhausting and at this point I refuse to engage with it at any level. There is no winning.
The solution is to grow our way out of it — make more energy, make cleaner energy, develop new technologies — rather than browbeat people into submission with moral arguments because you don't like how they've chosen to deploy their resources.
Try putting it in a different context because I think you are weighing both “AI” and “household operation” the same. For the use case you mentioned (image generation) what do you think has a bigger impact on someone’s quality of life: 1) operating a home or 2) generating AI images?
I would argue most people would weigh the convenience of a house a lot higher. That doesn’t mean AI is a net negative, just that it’s important to frame the problem that focuses on what matters most to people (quality of life).
> what do you think has a bigger impact on someone’s quality of life: 1) operating a home or 2) generating AI images? [...] I would argue most people would weigh the convenience of a house a lot higher.
To meaningfully compare directly like this, you'd need to choose two things with similar energy usage/carbon footprint. Sure "the convenience of a house" is greater, but a person's usage of an image-generation model also consumes many orders of magnitude less energy.
For a balanced comparison:
* A full year of LLM usage (5 joules per token[0] * 200 tokens per query * 10 queries per day * 365 days per year = 1.014 kWh, which by the US's energy mix[1] is about 0.4 kg of CO2)
* A single cup of coffee (about 0.4 kg of CO2 according to [2])
With that, I think most people would gain considerably more utility from the former.
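Spelled out, as a sketch using the figures cited in [0], [1], and [2] above:

    # One year of LLM usage vs one cup of coffee.
    joules = 5 * 200 * 10 * 365   # J/token * tokens/query * queries/day * days/year
    kwh = joules / 3.6e6          # 3.6 MJ per kWh
    print(kwh)                    # ~1.014 kWh per year
    print(kwh * 0.4)              # ~0.4 kg CO2 at ~0.4 kg/kWh (US mix, per [1])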
What does "operating a home" mean to you? Boiling water in the evening for tea (most people will fill the kettle up)? 100 Wh = 50 prompts? Which one's more important?
For me? The tea. To my point above, it adds a lot more to my quality of life. For someone else, it may be completely different. I’m not saying one has to be valued over the other, just that the issue should be framed more appropriately.
(Your numbers would also not be accurate for me. I warm tea in a microwave, so it's closer to 15 Wh.)
For other people AI is a tool for education, as it can explain concepts, is never tired and doesn't get bored. What is more important, a cup of tea or understanding that one thing you never had time for?
Are we really going to go down this road of bad-faith arguing? I could easily say "what's more important, learning a hobby or having shelter and food?" (Besides, this goes against the HN guidelines of assuming the strongest position of the comment.)
My point is not to claim one value system over another, but to frame the problem in a more appropriate light. I think, even taking your bad-faith stance above, it is still framed as “what adds more to one’s quality of life?”
This article only discusses image generation, a small subset of AI use cases. As users, governments, corporations, and background processes grow to consume AI all day, the water and power issues will grow with them. Whether it's AI or JavaScript, action is needed ahead of shareholder value.
Maybe they mean the water cooling systems used in the data centers. As far as I remember, it was a closed system so I don't know if that's significant.
Your numbers aside, how do you explain the prevailing "we need to build more power plants for AI" sentiment in some circles? Surely that indicates a significant carbon footprint.
(Especially when you consider how many are looking to coal and gas.)
Also, there's immense variation in the carbon intensity of electricity generation - an order of magnitude difference between Australia and France. How much electricity we use matters far less than how we're making that electricity.
Honest question. I remember some recent news about Microsoft considering bringing an "old" nuclear power plant back into service to cover their (maybe projected?) cloud power consumption due to AI. If AI's power consumption is negligible and less than a short YouTube ad anyway, why the need to bring a nuclear power plant back into operation?
An individual user's model usage is negligible, in that you should not feel bad about querying an LLM unless you feel approximately 1000 times worse about drinking a cup of coffee. But Google/Microsoft provide services for hundreds of millions, or even billions, of users.
In the same way, if your service was hairdryers (owing to progress in hot-air-over-IP) and you had 300 million users, you'd need 300 million * 2000 W * 10 minutes = 100,000 MWh daily, or about 5 copies of the nuclear power plant that Microsoft is reopening. You could look at it as those hairdryer power plants already effectively existing, just distributed as a small portion of many plants.
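Spelling that out (the ~835 MW plant capacity is my assumption for the reactor in question):

    # The hairdryer arithmetic.
    users, watts, hours = 300e6, 2000, 10 / 60
    mwh_per_day = users * watts * hours / 1e6
    print(mwh_per_day)                      # 100,000 MWh per day
    plant_mwh_per_day = 835 * 24            # assumed ~835 MW plant at full output
    print(mwh_per_day / plant_mwh_per_day)  # ~5 plants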
Yes, I get that it is an issue of scale, but that's the issue in the end. A commenter rightfully pointed out that there's no point in giving examples of other things that consume the same or more if at the end of the day we don't take action on those anyway. Another commenter, equally rightly, answered that the utility of these things is subjective, and this changes what we feel we should take action on. I still have to find a clear position on this.
Enough for them to accumulate the work of an incredible number of person-hours of graphic design. Is this not OK, while having a 500 W TV on all day is fine?
Or having no public transportation whatsoever and moving a 2,000 kg vehicle, with yourself inside, back and forth from point A to point B every day? With incredible amounts of wasted energy?
You seem to be missing the collective action aspect. You’re saying the equivalent of “it doesn’t matter if I dump my car oil in the river, because it’s a negligible impact on the environment.”
It can be marginal at the individual level but very important if it scales to the societal level.
That's a fair critique of a bad analogy. But it still misses the point of the collective action problem. Consider a different analogy: you are free to burn as much fuel as you want, and individually it doesn't make much difference. But if everyone collectively takes that same stance, it leads to large negative externalities.
IMO the extreme libertarian view works fine for small groups but does not do a good job of managing collective action problems in large societies when the scope exceeds our psychological bandwidth.
Ignoring the fact that “central” planning is a nebulous and uninformative term, what do you propose when those “economic and physical” constraints are insufficient to mitigate the negative externalities of a collective action problem?
And it is true that if they sequester the CO2 that they generate, it's entirely their business.
The problem is, the actions of others have an effect on me, my kids, and the rest of the world. One person's freedom ends where another's begins.
So whilst I generally feel people should mind their own business, I can't really see how we can just let people do whatever they want without at least having a public debate.
Such lectures are better conducted somewhere in ultra-luxury resorts (flying in by private jet, of course), to decide what the "plebs" are allowed to do.
Just a couple of examples of hypocrisy:
1. COP28 in Dubai (2023): The 28th UN Climate Change Conference saw approximately 644 private flights associated with the event, resulting in around 4,800 tonnes of CO₂ emissions.
2. World Economic Forum in Davos (2022): During this annual meeting, 1,040 private jets flew in and out of airports serving Davos, leading to a significant increase in emissions compared to previous years.
3. COP29 in Baku (2024): The 29th UN Climate Change Conference in Baku experienced a notable rise in private jet arrivals, with 65 private jets landing in the week leading up to the event, doubling the number from the previous year.
I get it. Other people are worse than me, so I have no responsibility and do not have to change anything.
Same thing my kids say about their scholastic results. Very appealing, I get it.
Unfortunately it is not very constructive or responsible. It means only the very worst person has to do something while the other 8b of us can point fingers and sit back.
Sure, you can. You can question WTF policy-makers are doing. Instead, you question individuals' decisions (aka "how dare you") that are well within current laws.
> For 20 million users? Great, you just proved that AI energy impact is not big.
You missed that this was just a calculation of Midjourney power use, and you missed the next paragraph:
"Keep in mind that our example only touches image generation services. Let's not forget about other AI services like ChatGPT, Gemini, Claude, Deepseek—and all their versions and variations. The list, and energy waste, continues to grow."
> Almost everything is worse than AI, basically.
I am glad you brought this up. AI just adds to it all.
I 100% guarantee you don't recycle. Perfect, the enemy of the good. That said, couldn't agree more about air travel, or travel in general: a monument to man's self-indulgent waste.
Doing calculations like these is a really bad idea because they muddy the waters, putting the focus on disputable numbers - unless you're willing to do a proper study and really dive into it. It's extremely tiring seeing all these well-meaning people, including possibly this author (unless they're writing this for clout), making this mistake time and time again. You're actively not helping.
There's a much better heuristic. Just look at the contracts that the model providers (MS, Google et al) have recently, suddenly established to buy lots of electricity, spin up nuclear power plants and so on. That's the giveaway and one that doesn't need any calculations. Plus the hundreds of billions being invested in datacentres purely for these models.
You see it in this very comment section: lots of naysayers purely based off the provided, utterly meaningless numbers, when the above facts say it all and render everything else moot. If the training and inference didn't cost insane amounts of energy, they wouldn't be hastily taking up these contracts and investing such obscene amounts in datacentres. That's all you need to know.
We also need to consider that AI is being used as a replacement for things that use a lot less energy. Like, someone "asking ChatGPT" a simple question is using a lot more energy than using a traditional search engine. This gets worse when AI is shoehorned into random stuff, thus making that stuff less efficient.
Completely agree. There are so many additional layers of electrical costs (and other costs) not factored in:
- The build/infrastructure costs, especially the early models before the A100 and datacenters jumped on the bandwagon
- Training costs of the model being used
- Training costs of failed models and tests
- Training costs of previous models that have been superseded before their cost could be recaptured from users
The actual energy used by AI is likely orders of magnitude higher than estimated here, but without really justifying all the numbers used and sourcing them, it's going to be a never-ending argument between the pro- and anti-AI crowds with the realists stuck in between trying to read the data.
In the example you compare something with 20 million daily users to the energy usage of 25,000 homes. I feel like comparisons like this mostly work (and feel scary) because we don't really grasp how much bigger 20 million really is. It feels a bit like scaremongering to me (without providing context of AI energy use vs other online activities/services).
This is a great point regarding what we ought to consider when adapting our lifestyle to reduce negative environmental impact:
> In deciding what to cut, we need to factor in both how much an activity is emitting and how useful and beneficial the activity is to our lives.
Although I would extend "our lives" to "society". His own example of a hospital emitting more than a cruise ship is a good illustration of this; and as a more absurd example, it would drastically cut emissions if we removed all humans and replaced them with LLMs (which sort of defeats the entire point, obviously, because LLMs would no longer be needed).
Continuing this line of thought, when considering your use of an LLM, you ought to weigh not merely its emissions and water usage, but also the larger picture of how it benefits human society.
For example: Is it based on ethically sound approaches? (If it is more like "ends justify the means", do we even know what those ends are?) What are its foreseeable long-term effects on human flourishing? Will it (unless regulated) cause a detriment to the livelihoods of many people while increasing the wealth gap with the tech elites? Does it negatively impact open information sharing (willingness to run self-hosted original content websites or communities open to the public, or even the feasibility of doing so[0][1]), motivation and capability to learn, creativity? And so forth.
...how do you think you got your job? You ever see those old movies with rows of people with calculators manually balancing spreadsheets with pen and paper? We are the automators. We replaced thousands of formerly good paying jobs with computers to increase profits, just like replacing horses with cars or blacksmiths with factories.
The reality of AI, if AI succeeds in replacing programmers (and there's reason to be skeptical of that) is that it will simply be a "move up the value chain". Former programmers instead of developing highly technical skills will have new skills - either helping to make models that meet new goals or guiding those models to produce things that meet requirements. It will not mean all programmers are automatically unemployable - but we will need to change.
A few questions popped into my head. Can you retain the knowledge to evaluate model output required to effectively help and guide models to do something if you do not do it yourself anymore? For humans to flourish, does it mean simply “do as little as possible”? Once you automated everything, where would one find meaningful activity that makes one feel needed by other humans? By definition automation is about scaling and the higher up the chain you go the fewer people are needed to manage the bots; what do you do with the rest? (Do you believe the people who run the models for profit and benefit the most would volunteer to redistribute their wealth and enact some sort of post-scarcity communist-like equality?)
> Can you retain the knowledge to evaluate model output required to effectively help and guide models to do something if you do not do it yourself anymore?
I mean, education will have to change. In the early years of computer science, the focus was on building entire systems from scratch. Now programming is mainly about developing glue between different libraries to suit our particular use case. This means that we need to understand far less about the theoretical underpinnings of computing (hence all the griping about why programmers don't need to write their own sorting algorithms, so why does every interview ask it).
It's not gone as a skill, it's just different.
>For humans to flourish, does it mean simply “do as little as possible”? Once you automated everything, where would one find meaningful activity that makes one feel needed by other humans?
So I had a eureka moment with AI programming a few weeks ago. In it, I described a basic domain problem in clear English. It was revealing not just because of all the time it saved, but because it fundamentally changed how programming worked for me. Instead of writing code, I was able to focus my mind completely on one single domain problem. Now my experiences with AI programming have been much worse since then, but I think it highlights how AI has the potential to remove drudgery from our work - tasks that are easy to automate, are almost by definition, rote. I instead get to focus on the more fun parts. The fulfilling parts.
> By definition automation is about scaling and the higher up the chain you go the fewer people are needed to manage the bots; what do you do with the rest? (Do you believe the people who run the models for profit and benefit the most would volunteer to redistribute their wealth and enact some sort of post-scarcity communist-like equality?)
I think the best precedent here is the start of the 20th century. In this period, elites were absolutely entrenched against the idea of things like increasing worker pay or granting their workers more rights or raising taxes. However, I believe one of the major turning points in this struggle worldwide was the revolution in Russia. Not because of the communist ideals it espoused, but because of the violence and chaos it caused. People, including economic elites, aren't marxist-style unthinking bots - they could tell that if they didn't do something about the desperation and poverty they had created, they would be next. So due to a combination of self interest, and yes, their own moral compasses, they made compromises with the radicals to improve the standard of living for the poor and common workers, who were mostly happy to accept those compromises.
Now, it's MUCH more complicated than I've laid out here. The shift away from the Gilded Age had been happening for nearly twenty years at that point. But I think it illustrates that concentrating economic power that doesn't trickle down is dangerous - elites who create constant social destruction with no shared reward will destroy themselves. And they will be smart enough to realize this.
> AI has the potential to remove drudgery from our work - tasks that are easy to automate, are almost by definition, rote.
I like to think that the best kind of automation when it comes to writing code is writing less code, but instead writing it with strategic abstractions embodying your best understanding of subject matter and architectural vision.
> Maybe it's a negative for you if you already have marketable skills, but a positive for others who want to get in.
I am not fully clear - to get in on what? The skill that is valued less and less? Or on being an LLM prompter? How much would rational management be willing to pay a prompt writer (assuming they cannot automate that as well in the first place)?
yeah but we don't have time to analyse this for years and years while upping our power consumption. In the end we consume too much dirty power and have to change this, and quickly. AI is worth nothing if the world is burning.
> yeah but we don't have time to analyse this for years and years while upping our power consumption
this is 100% true. we also don’t have time to debate the morality and necessity of each specific activity for years. if AI energy use is indeed as small as some comments here suggest, ignoring it to focus on improving things like heating, cooling, and transportation could be a better course of action.
Is it energy waste if it does something productive? For me, energy waste is energy which, if not used, wouldn't have any noticeable impact.
For instance, most city lights are on even when nobody needs them (using smart activators would allow us not to waste that energy).
Energy spent manufacturing items which are never used - that's truly wasted.
But generating images, text and code to allow people to make decisions, iterate on ideas, etc. That's not a waste in my opinion, that's actually using energy in one of the most interesting parts of our lives: being creative.
>Is it energy waste if it does something productive?
To the extent that it’s not a 100% efficient system, yes. A quick search says about 60%-80% of CPU energy is waste heat. On top of that, it’s usually required to be removed by air-conditioning, so there is considerable “waste” even if you value the end product.
A manufacturer doesn’t get to claim all their scrap isn’t really waste because the rest was used to build something meaningful. So tracking waste is still important.
> A quick search says about 60%-80% of CPU energy is waste heat.
That's an odd metric. Doesn't effectively all input energy leave the CPU as heat (ignoring reactive power and stuff like that)? Is there a theoretical minimum energy-per-unit-computation that makes up that 20%-40%?
That’s a good point. I’m definitely biased by thinking about it in terms of mechanical work. Some work is getting done, even if it’s just electrons being pushed around in transistor junctions. Maybe someone with a better background can clarify.
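For what it's worth, there is a theoretical floor: Landauer's principle puts the minimum energy to erase one bit at kT·ln 2, which at room temperature is around 3e-21 J - many orders of magnitude below what real CPUs dissipate per operation, so in that sense nearly all of their draw ends up as heat:

    # Landauer limit: minimum energy to erase one bit of information.
    import math
    k_boltzmann = 1.380649e-23                 # Boltzmann constant, J/K
    t_room = 300                               # room temperature, K
    print(k_boltzmann * t_room * math.log(2))  # ~2.9e-21 J per bit erased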
Using AI to generate digital artifacts is creative?
To me it seems the opposite. It requires no skill, no intuition, no mastery, and contributes nothing to the human condition.
Worse: it parasitically inserts itself where real creativity should alight in a person’s life and lead them to mastery of some domain in the real. (And inevitably, to a more profound philosophical relationship with their existence, and the loosening from what entraps them)
Not saying it is all bad, but it is certainly not on the same footing as putting time and effort into understanding poetics or painting or music and the internal rewards practices like those build into a person's life.
Churning out digital images and generative music is not equivalent.
And it takes so much more energy and resources to get to it! My god.
It's jointly the most physically and existentially wasteful thing we've ever involved ourselves with as a species! It is an extension of the same mind that churns out endless plastic crap for human "consumption" (read: garbage piles).
Instead of fighting so hard for something so empty why don’t you pick up a book you haven’t read? There are worlds of understanding that are lying just beyond the arrogance of “tech people” everywhere.
If I sound hostile, it's because the tech bros fired the first shot. And, a famous line from a hard-learned poet (not an industrial rhyme machine): I will not go quietly into that good night.
Not when there is a new black magic priestclass spewing blasphemy against the human spirit.
From a purely technical standpoint: what exactly has this thing produced for us? I hear lots of fortune-telling about it, but I haven't seen results. Just more stupid memes. If it's all the drug research and protein folding, then millions of people probably don't need access in order to skimp on their studies.
(Side rant: the new AI built into my phone's keyboard makes me want to throw it more often than it helps me type the right words. The keyboard worked better in its former incarnations.)
Decorating is a form of creativity. I might not have drawn any of the art in my house myself (it was made by other human beings in my case, most of whom I've never met) but the choices I made on which art to present, how to position each on my walls, and which room I placed them in were a form of self-expression. Am I going to claim it's a deeply meaningful form of self-expression? No, I'm not. But I think the sneering attitude towards the modern equivalent of teenagers cutting out pictures from magazines and sticking them on their walls is a bit misguided.
The last defence of the arrogant. You must hang onto this wasteful device so much that you will twist the words of others to suit your agenda.
I’m not sneering at play. The opposite. The black magicians as I’ve labelled them here have parasitically replaced real play with the task of training their machines of future control. (Doubt me? They shout this shit at bloody conferences and on podcasts)
Next time you use generative AI, try writing a prompt and iterating on it. We usually prompt generative AI; maybe you are using it without prompting. Who knows.
On my MBP M2 Max 16" with 92GB of RAM, 2 prompts on llama3.3:70b or 1 prompt on deepseek-r1:70b take about 10% of battery. Really makes me nervous when I think about people (ab)using gpt-o1 or 4o for everything.
For a prompt to make the cut, I usually judge whether the task (or the documentation) is boring enough, time-consuming enough, yet precise enough for an LLM to give me the solution. If something requires more than three to five prompts, then I need to work on it myself and come back with smaller and more precise questions.
IMO, if the green energy industry wants to penetrate an important domain of modern life fast, the serving of open source models seems like a really low hanging fruit.
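To put that 10% in energy terms, a sketch assuming the 16-inch MacBook Pro's ~100 Wh battery:

    # Rough per-prompt energy for the local 70B runs described above.
    battery_wh = 100                              # assumed 16" MBP battery capacity
    wh_per_llama_prompt = battery_wh * 0.10 / 2   # 10% of battery per 2 prompts
    wh_per_r1_prompt = battery_wh * 0.10 / 1      # 10% of battery per 1 prompt
    print(wh_per_llama_prompt, wh_per_r1_prompt)  # ~5 Wh and ~10 Wh per prompt

That's the same ballpark as the ~2 Wh per hosted query cited elsewhere in this thread, presumably reached via batching and inference-optimized hardware.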
With 10% of battery I can code for 3 hours. If I compare the result of this activity with the 10% of battery wasted on a single question whose answer probably won't be completely correct, I consider the latter to be excessive, in relative terms, even if I take into account my time cost. Theoretically, if I would need to ask 100 questions to complete one task, I would need to recharge my computer 10 times. This is why I offload the burden of energy usage to a third party such as OpenAI.
BTW I don't drive if I don't have to either, which might make me more sensitive than you to this, comparatively low, energy usage diff.
Funny, I actually think the energy use was high, not low. Like, energy to move a 1,000 kg thing 100 meters just to answer a simple question? Try doing that with your own muscles.
And that from a system that is using like 10% of the energy OpenAI does. So for OpenAI every useless request is like pushing 1,000 kg for 1 km.
If something uses electricity then the footprint is mostly about the source of that electricity.
Solar, Wind and nuclear are magnitudes cleaner than fossil fuels.
So, the correct answer is to clean up the grid. Which is relatively easy and cheap and impacts many fields outside AI, further speeding us down the cost curve to cheaper cleaner power.
And once that is happening, you can start moving things that don't even use electricity at the moment like transport, heating and industrial processes to electrical usage. That task, which we should do because it's cheaper and cleaner and will prevent catastrophic climate change, will require advanced nations to double their electricity usage so data center usage is not really moving the needle.
You can also build the datacenters somewhere where there is plenty of carbon free energy available. Google has just recently bought two plots in Northern Finland, near some serious grid connections, and they're not the only one with plans.
Finland can build and has been building a lot of wind power lately since it's flat and quite sparsely populated but with relatively good infrastructure. There are a lot of projects in the planning pipelines just waiting for demand. Electricity prices are among the lowest in Europe already.
At the same time, let's also not rush to hand wave away issues just because "others are likely worse".
Those industries are also an issue, and the question is also applicable there. In fact, the question of tourism's impact on the environment, and how to reduce that impact, is an ongoing one.
In fact, exactly because it is an emerging technology now is a good time to be conscious about environmental impact. Because it is easier to address issues now than later down the line when it is no longer emerging and already ingrained.
Aggregating load is useful to get a sense of overall scale, but per-use costs are probably a more realistic proxy. One query is ~2 Wh. At average retail pricing (17.5c per kWh) that's like... 0.035c per query. You're at like 30 queries before reaching a second cent of electricity.
A monitor is like ~10-20W. So one query is like having your monitor on for an extra 12 minutes a day.
At this scale, there's going to be no consumer side incentive to conserve.
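The arithmetic, for reference:

    # Per-query retail electricity cost, per the figures above.
    cost_cents = 2 / 1000 * 17.5   # 2 Wh at 17.5 cents/kWh
    print(cost_cents)              # 0.035 cents per query
    print(1 / cost_cents)          # ~29 queries per cent of electricity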
The author seems to assume that each daily user is running a new prompt every hour. I suspect the actual median number of images per DAU is lower than that. A user doing 20Wh roughly corresponds to heating a single meal in the microwave, it probably is not a leading cause of their carbon footprint.
If, in order to offset the cost of my AI use, I had to reduce the thermostat by 1/10 of a degree, I'd take that trade. But I'm pretty sure that would reduce my carbon footprint much more than stopping AI use would.
Gov should fine/tax emissions. If the emissions are too high then tax it more. End of story.
Does it really matter who is doing it and for what purpose?
This reminds me of the internet going crazy over Taylor Swift flying everywhere with her private jets.
It just smells like trying to take blame away from the gov and putting it on private people/companies who are just following the law.
As long as you’re paying a fine proportional to the damage you’re causing, I don’t care whether it’s for AI or crypto or just pumping carbon for the sake of it.
"Can we calculate the energy cost of AI models available as services online? The truth is-we can't, not without access to their datacenters."
If "AI" energy use is nothing to be concerned about, as alleged by the top comment, then the "AI" companies might benefit from releasing data showing the (low) energy and water consumption. As it happens, none of them will release the data public wants about energy consumption. It is believed this withholding of data is intentional.
Can't find where the A100 figure comes from, but an RTX 4090 can generate more than an image a second[0]; assuming a constant power draw of 450W, that's 0.000125kWh per image, compared to, say, 2.4kWh for a pot of coffee[1].
If you're running it as a service, you'd likely optimize the model with TensorRT and run on hardware intended for inference instead of consumer/training GPUs, so I don't think either of these figures can be extrapolated to Midjourney.
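For what it's worth, here is that arithmetic as a small Python sketch; the 450W draw and one-image-per-second rate are this comment's assumptions, and the coffee figure comes from its second link:

    # Energy per image on an RTX 4090, under the assumptions above.
    GPU_WATTS = 450.0         # assumed constant whole-card draw
    SECONDS_PER_IMAGE = 1.0   # assumed, per the linked benchmark

    kwh_per_image = GPU_WATTS * SECONDS_PER_IMAGE / 3600 / 1000
    print(f"{kwh_per_image:.6f} kWh per image")   # 0.000125 kWh

    POT_OF_COFFEE_KWH = 2.4   # figure cited in the comment
    print(f"images per pot of coffee: {POT_OF_COFFEE_KWH / kwh_per_image:,.0f}")
    # -> roughly 19,200 images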
Don’t forget the US administration’s plan to “win the AI race,” which includes building a ridiculous number of new data centres and methane power plants… and to drill, baby, drill.
AI tech is quickly becoming a concerning contributor to the mix of emissions driving climate change.
I realize that in some future state someone, somewhere will figure out how to make hardware that draws less power and requires less cooling. Some researcher at some point will find a more efficient algorithm. And all of it will run off of green energy. I don’t think we have time to mess around and find out if that will happen.
We’ve recently seen years at about 1.5C of warming, and if we continue on this course it’s quite likely that will become the average.
AI might play a significant role in contributing to the crisis. And once VC and government subsidies run out, if these companies have to pay for the externalities, will it still be profitable?
I was skeptical that Bitcoin still uses more than AI, but it seems to indeed be the case (or close): current energy estimates for Bitcoin are around 175 TWh per year[0] (though I don't know if those are extra pessimistic or not).
For AI, total consumption is perhaps a little lower currently (the figures I can quickly find differ), but it is mostly projected to go to over twice that in the next couple of years[1].
This doesn't account for the servers the GPUs are hosted in, their CPU loads, or losses in the PSU, all of which inflate those numbers further.
Then there is the cooling, which needs to remove every watt consumed by the GPUs and servers, since it all ends up as heat (multiplied by the PSU losses).
Things are amplified further by the inefficiencies of the cooling process itself, including cooling the HVAC hot side (and the water consumption of the cooling towers), and by building insulation.
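One rough way to fold those overheads into GPU-only numbers is a PUE (power usage effectiveness) multiplier. The sketch below is purely illustrative; the PSU efficiency and PUE values are assumptions, not figures from the article:

    # Hypothetical overhead scaling: raw GPU energy -> facility energy.
    GPU_KWH_PER_DAY = 40_000.0   # example raw GPU figure (~40 MWh/day)
    PSU_EFFICIENCY = 0.92        # assumed server power supply efficiency
    PUE = 1.3                    # assumed facility overhead (cooling etc.)

    wall_kwh = GPU_KWH_PER_DAY / PSU_EFFICIENCY   # energy drawn at the wall
    facility_kwh = wall_kwh * PUE                 # including cooling/HVAC
    print(f"facility energy: {facility_kwh:,.0f} kWh/day")   # ~56,500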
Then the article only accounts for Midjourney, which is only one of the operators in this space. All the major tech companies have huge fleets of servers working on this.
Thanks. There is a lot of lumping together in the above link. I would love to see each platform (and AI platforms as well) and their annual energy consumption.
I agree that more detailed information would help further the discussion a lot. I am also interested in the mechanics of how these things are calculated.
What if we built more nuclear reactors and solar and wind farms... and then didn't use it to power LLMs, but instead were able to use less fossil fuels? :-)
Water needed for the power generation and onsite cooling seems to be the higher risk and impact. Aquifer depletion has real and permanent impacts to the community.
If we assume the indicated 400 W load with a total processing time of 20 s per user query, and 20 mio queries per day (‘daily users’), shouldn’t the daily consumption be closer to 44 MWh? (20 mio x 0.4 kW x 20 s / 3600 s/h ≈ 44,000 kWh.) Other than that, the energy use for LLMs is obviously substantial. I seem to remember that operating costs are split fairly evenly between hardware (i.e., GPU) depreciation and energy.
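Spelling that recalculation out, using only the comment's own assumptions:

    # 20 million daily queries at 400W for 20s each.
    QUERIES_PER_DAY = 20_000_000
    LOAD_KW = 0.4           # 400 W
    SECONDS_PER_QUERY = 20

    kwh_per_query = LOAD_KW * SECONDS_PER_QUERY / 3600
    daily_kwh = kwh_per_query * QUERIES_PER_DAY
    print(f"{kwh_per_query:.5f} kWh per query")   # ~0.00222 kWh
    print(f"{daily_kwh:,.0f} kWh per day")        # ~44,444 kWh, i.e. ~44 MWh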
> On the other hand everyone is concerned about the planet's health
This whole article rests on this faulty premise.
It’s like saying everyone is concerned about shark attacks. Everyone’s not THAT concerned or the beach would be a lot less crowded.
“Planet health” is probably not even in the top 1000 of concerns for most people. They may claim that it is. But then ask what they’ve done about it and they’ll likely say, “I’m just one person what I do won’t matter…”
Something being a concern and doing something about it are two different things.
I’m concerned about plenty of things, for example regarding my health or relationships, yet mostly fail to take action to address these concerns. The same holds true for the state of this planet; climate change and microplastics all over etc.
I’m sure ”planet health” is in the top 1000 concerns for most people not denying climate change.
I guess that’s where I see a difference. If you look at a problem and say, “I definitely care about this, I’m just failing to take action,” I’ll say, “Can’t be that big a problem then…”
Millions of people have been displaced in 2024 due to climate induced disasters. I’m pretty sure there are quite a few people concerned about climate change.
If a person was displaced in 2024 due to climate induced disaster, do they care about the climate or do they care about being displaced by the disaster?
If those people hadn’t been displaced in 2024 due to climate induced disasters, what would they be doing today to prevent climate induced disasters?
> I’m pretty sure there are quite a few people concerned about climate change.
I never said there weren’t. There’s like 8 billion people on Earth. I said for most of them, climate change is not in their top 1000 concerns.
We can jump into whatever equivalent of Zeno's paradox this is.
Not only has their house been burned to the ground, insurance has refused to cover the cost, and they have no idea where their pets are; but they're hungry because they haven't eaten in hours.
Do they care more about being displaced by the disaster or eating?
I think it can be said they care about both.
Research shows that 3.6 billion people live in areas threatened by climate change-induced disasters.
Just because some of them are hungry at any given moment doesn't remove the fact that they can also be concerned that the land where they grew up might be uninhabitable before they retire.
> We can jump into whatever equivalent of Zeno's paradox this is.
I don’t know what that is but OK.
> Do they care more about being displaced by the disaster or eating?
> I think it can be said they care about both.
I don’t disagree but this is also not at all what I asked.
People care about the things in front of them.
Nobody really cares about climate change until it’s in front of them in terms of a burnt down house or chest-deep flood waters.
If they did, they would do something to address it. But they don’t.
Because addressing it (for the wealthiest people who have the ability to address it) means making significant changes to their lives.
You can tell me that you care about the climate all you want but until you’re willing to sacrifice something from your life as a result, I’m not particularly interested in hearing anything else.
Personally, I have never driven a car, live in a small apartment, don’t eat meat, and don’t fly in airplanes. I do those things because I like them. If driving a car, eating meat, and taking cross-country flights were keys to fixing the climate I wouldn’t do them!
I care about myself more than I do climate change. The same as essentially everybody else.
You are drowning in hypotheticals muddied by your own worldview. I can also say that there are 8 billion people on Earth. For most of them, green isn’t in their top 5 favorite colors.
I mean, the revealed preferences of the voters of Washington State seem to suggest it's a much higher than top 1000 issue to them. About 2% of the state's budget comes from selling CO2 emissions rights in a cap and trade scheme, approved by voters directly as an initiative, all of the revenue from which is earmarked for environmental projects. And that's ignoring the existing gas tax and revenue from other sources invested in CO2 reduction.
Obviously YMMV by jurisdiction but I think it's well above the 1000th priority for many people in the developed world. Though you're right to point out that it falls far below things like education, or health, or even recreation, all of which AI could potentially have a hand in improving.
That is the problem we have. Saying you are "concerned" is one thing; acting on it is quite another. It would mean leaving one's standard of living and comfort zone and facing some unpopular realities that are incompatible with the more comfortable devil-may-care attitude:
- Not flying around the world for vacations and events
- Buying and consuming less of almost everything that is not essential to life.
- No more fancy AI-generated images just for fun and showing off.
- ...
The list goes on, but even acknowledging some of it does not mean acting on it.
Hell, yeah, we better leave the consequences to our grandchildren.
OP is making a decision about a complex problem using only a back-of-the-envelope calculation and without looking for scientific studies on the matter.
We’re not obliged to take their advice.
Typing “AI Carbon Footprint” on Google Scholar brings much better info than this post.
As others have said, arguing from a disputable estimate isn't great. It's even worse if you don't understand physical units and miscalculate. Even assuming the 2Wh per prompt is correct, the calculation "2Wh * 20mio users * 24h" for daily consumption makes no sense: Watt-hours are already a unit of energy, so multiplying them by hours again doesn't produce a meaningful quantity.
If we assume that each user generates one prompt per day, the actual (still very disputable) calculation would be 2Wh * 20mio users = 40 MWh per day, which would be, according to the author's own comparison, around 1000 households' worth of daily consumption. Which isn't a lot.
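The corrected calculation as a sketch; the one-prompt-per-day rate is this comment's deliberately low assumption, and the per-household figure is an approximation chosen to match the author's ~1000-household comparison:

    # Corrected daily-energy arithmetic from the comment above.
    WH_PER_PROMPT = 2.0
    USERS = 20_000_000
    PROMPTS_PER_USER_PER_DAY = 1   # deliberately low assumption

    daily_mwh = WH_PER_PROMPT * USERS * PROMPTS_PER_USER_PER_DAY / 1e6
    print(f"{daily_mwh:.0f} MWh per day")   # 40 MWh

    HOUSEHOLD_KWH_PER_DAY = 40.0   # assumed average household consumption
    print(f"household equivalents: {daily_mwh * 1000 / HOUSEHOLD_KWH_PER_DAY:,.0f}")
    # -> ~1,000 households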
I believe that energy consumption with AI is a large problem, but articles like this unfortunately do not provide a basis for arguments.
The good news is that these costs are still in exponential decline and will be for at least a few years. The early iterations are costly, but later versions will be cheap.
The bad news is that because we live under Hobbesian capitalism, (a) the high-value target is human labor (i.e., the ruling class is deciding the middle one is an undesirable expense and a competitive threat) rather than ecological harm, and (b) we will probably see Jevons Paradox arms races that almost no one wants as warmongers and billionaires one-up each other.
The technology itself, nevertheless, could very much be a force for good.
The right reflex here is to push for clean energy, not to constrain energy usage. There's a false narrative that constraining energy usage helps solve global warming. It doesn't; it only slows the pace at which the problem gets worse. Which sounds better than nothing but isn't that helpful.
So far carbon emission growth is slowing, but it hasn't turned negative. We need negative growth of carbon emissions, preferably without killing our economy, which, like it or not, generally involves increased energy usage. It's economic growth that pushes the fastest growing (generally poorest) nations to consume more energy, without much concern for the damage they are doing.
So, killing sources of energy usage (aka. economic growth) is not a solution. We can all live like frugal monks and hope for the best. Except there is no royal "we" and good luck convincing the 8 billion or so people on this planet to do so. Especially in those fast growing less wealthy nations. It's a really hard sell.
The alternative is switching to clean energy, which we are actually doing at an accelerating rate. It's the thing that works best. It's the thing that has a measurable impact, right now. And the irony is that creating more demand for clean energy actually reduces the proportion of dirty energy we have to worry about, to the point where fixing that becomes a manageable challenge. A lot of places already generate double-digit percentages of their energy with renewables on an annual basis. In a few decades that puts us on a path to net zero worldwide, and eventually to going carbon negative. That's a trend we can accelerate, and one that actually creates economic growth.
So, AI and energy usage is both an energy challenge and a massive economic opportunity. Meeting this challenge isn't about restraining the use of AI but about sourcing the required energy for it sustainably. There's no good excuse for firing up a lot of gas generators to power AI, which is something Elon Musk has done at his AI company, for example. That's just bad. And of course our usual cloud providers are still a bit hand-wavy about where the energy for their data centers comes from. Planting trees (or other carbon offset nonsense) isn't the same as not burning any fossil fuels; one doesn't compensate for the other. People need to stop burning stuff, not compensate for the fact that they are burning stuff.
But luckily, renewables aren't growing because they are green but because they are cheap. Cheap energy from the sun or from wind is an easy sell. Even to climate change deniers. Renewables are attractive to any large scale users of energy. Like data centers. That's a good thing.
The solution lies in making cloud providers hurry up their transition to green energy. They are super profitable, so there's little excuse on that side; they can afford to invest in lowering their energy bills. Their clients could also be a bit more critical about whom they choose to host with. Even a little bit of pressure from that side is helpful. If AWS starts losing big customers because their data centers are too dirty, they'll bend over backwards to fix it.
This is one of my main concerns with the current trend of shoving AI into everything.
Yes, it's a cool new technology and very useful for some things.
No, it's not a great fit for literally everything and by adding it to things where it doesn't actually add value, we're not only making products worse, we're also burning down our already dying planet for no reason.
I've thought it before and I'll think it again, renewable energy is not enough on its own; energy consumption should be slashed as well, and datacenters (of any kind) are not helping.