AI has a hidden water cost – here’s how to calculate yours

AI systems’ water usage can vary widely, depending on where and when the computer answering the query is running.

Author: Leo S. Lo on Sep 01, 2025
 
Source: The Conversation
How many AI queries does it take to use up a regular plastic water bottle's worth of water? kieferpix/iStock/Getty Images Plus

Artificial intelligence systems are thirsty, consuming as much as 500 milliliters of water – a single-serving water bottle – for each short conversation a user has with the GPT-3 version of OpenAI’s ChatGPT system. They use roughly the same amount of water to draft a 100-word email message.

That figure includes the water used to cool the data center’s servers and the water consumed at the power plants generating the electricity to run them.

But the study that calculated those estimates also pointed out that AI systems’ water usage can vary widely, depending on where and when the computer answering the query is running.

To me, as an academic librarian and professor of education, understanding AI is not just about knowing how to write prompts. It also involves understanding the infrastructure, the trade-offs, and the civic choices that surround AI.

Many people assume AI is inherently harmful, especially given headlines calling out its vast energy and water footprint. Those effects are real, but they’re only part of the story.

When people move from seeing AI as simply a resource drain to understanding its actual footprint – where the effects come from, how they vary and what can be done to reduce them – they are far better equipped to make choices that balance innovation with sustainability.

2 hidden streams

Behind every AI query are two streams of water use.

The first is on-site cooling of servers that generate enormous amounts of heat. This often uses evaporative cooling towers – giant misters that spray water over hot pipes or open basins. The evaporation carries away heat, but that water is removed from the local water supply, such as a river, a reservoir or an aquifer. Other cooling systems may use less water but more electricity.

The second stream is used by the power plants generating the electricity to power the data center. Coal, gas and nuclear plants use large volumes of water for steam cycles and cooling.

Hydropower also uses up significant amounts of water, which evaporates from reservoirs. Concentrated solar plants, which run more like traditional steam power stations, can be water-intensive if they rely on wet cooling.

By contrast, wind turbines and solar panels use almost no water once built, aside from occasional cleaning.

Cooling towers, like these at a power plant in Florida, use water evaporation to lower the temperature of equipment. Paul Hennessy/SOPA Images/LightRocket via Getty Images

Climate and timing matter

Water use shifts dramatically with location. A data center in cool, humid Ireland can often rely on outside air or chillers and run for months with minimal water use. By contrast, a data center in Arizona in July may depend heavily on evaporative cooling. Hot, dry air makes that method highly effective, but it also consumes large volumes of water, since evaporation is the mechanism that removes heat.

Timing matters too. A University of Massachusetts Amherst study found that a data center might use only half as much water in winter as in summer. And at midday during a heat wave, cooling systems work overtime. At night, demand is lower.

Newer approaches offer promising alternatives. For instance, immersion cooling submerges servers in fluids that don’t conduct electricity, such as synthetic oils, reducing water evaporation almost entirely.

And a new design from Microsoft claims to use zero water for cooling, by circulating a special liquid through sealed pipes directly across computer chips. The liquid absorbs heat and then releases it through a closed-loop system without needing any evaporation. The data centers would still use some potable water for restrooms and other staff facilities, but cooling itself would no longer draw from local water supplies.

These solutions are not yet mainstream, however, mainly because of cost, maintenance complexity and the difficulty of converting existing data centers to new systems. Most operators rely on evaporative systems.

A simple skill you can use

The type of AI model being queried matters, too, because models differ in complexity and in the hardware and processing power they require. Some models may use far more resources than others: one study found that certain models can consume over 70 times more energy and water than ultra-efficient ones.

You can estimate AI’s water footprint yourself in just three steps, with no advanced math required.

Step 1 – Look for credible research or official disclosures. Independent analyses estimate that a medium-length GPT-5 response, which is about 150 to 200 words of output, or roughly 200 to 300 tokens, uses about 19.3 watt-hours. A response of similar length from GPT-4o uses about 1.75 watt-hours.

Step 2 – Use a practical estimate for the amount of water per unit of electricity, combining the usage for cooling and for power.

Independent researchers and industry reports suggest that a reasonable range today is about 1.3 to 2.0 milliliters per watt-hour. The lower end reflects efficient facilities that use modern cooling and cleaner grids. The higher end represents more typical sites.

Step 3 – Now it’s time to put the pieces together. Take the energy number you found in Step 1 and multiply it by the water factor from Step 2. That gives you the water footprint of a single AI response.

Here’s the one-line formula you’ll need:

Energy per prompt (watt-hours) × Water factor (milliliters per watt-hour) = Water per prompt (in milliliters)

For a medium-length query to GPT-5, the calculation uses 19.3 watt-hours and 2 milliliters per watt-hour: 19.3 x 2 = 38.6, or about 39 milliliters of water per response.

For a medium-length query to GPT-4o, the calculation is 1.75 watt-hours x 2 milliliters per watt-hour = 3.5 milliliters of water per response.

If you assume the data centers are more efficient, and use 1.3 milliliters per watt-hour, the numbers drop: about 25 milliliters for GPT-5 and 2.3 milliliters for GPT-4o.
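The three steps above amount to one multiplication, which can be sketched as a short script. The energy figures and water factors are the illustrative estimates quoted in this article, not official disclosures:

```python
# Sketch of the three-step water-footprint estimate described above.
# Energy per response (watt-hours) and the water factor (milliliters
# per watt-hour) are the article's illustrative estimates.

ENERGY_WH = {
    "GPT-5": 19.3,   # medium-length response, independent estimate
    "GPT-4o": 1.75,  # medium-length response, independent estimate
}

def water_per_prompt_ml(energy_wh, water_factor_ml_per_wh):
    """Energy per prompt (Wh) x water factor (mL/Wh) = water per prompt (mL)."""
    return energy_wh * water_factor_ml_per_wh

for model, wh in ENERGY_WH.items():
    low = water_per_prompt_ml(wh, 1.3)   # efficient facility
    high = water_per_prompt_ml(wh, 2.0)  # more typical facility
    print(f"{model}: {low:.1f} to {high:.1f} mL per response")
```

Swapping in a different energy estimate or water factor – say, a figure from a provider's own disclosure – only changes the inputs; the formula stays the same.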

A recent Google technical report said a median text prompt to its Gemini system uses just 0.24 watt-hours of electricity and about 0.26 milliliters of water – roughly the volume of five drops. However, the report does not say how long that prompt is, so it can’t be compared directly with GPT water usage.

Those different estimates – ranging from 0.26 milliliters to 39 milliliters – demonstrate how much the effects of efficiency, AI model and power-generation infrastructure all matter.

Comparisons can add context

To truly understand how much water these queries use, it can be helpful to compare them to other familiar water uses.

When multiplied by millions, AI queries’ water use adds up. OpenAI reports about 2.5 billion prompts per day. That figure includes queries to its GPT-4o, GPT-4 Turbo, GPT-3.5 and GPT-5 systems, with no public breakdown of how many queries are issued to each particular model.

Using independent estimates and Google’s official reporting gives a sense of the possible range:

  • All Google Gemini median prompts: about 650,000 liters per day.
  • All GPT-4o medium prompts: about 8.8 million liters per day.
  • All GPT-5 medium prompts: about 97.5 million liters per day.
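The daily totals above follow directly from the per-prompt figures. A minimal sketch, assuming all of the roughly 2.5 billion daily prompts went to a single model (in reality the per-model mix is not public):

```python
# Scale per-prompt water use to a daily total, assuming every one of
# the roughly 2.5 billion daily prompts went to a single model.
# Per-prompt figures (mL) are the article's illustrative estimates.

PROMPTS_PER_DAY = 2.5e9

def daily_liters(ml_per_prompt, prompts_per_day=PROMPTS_PER_DAY):
    """Convert per-prompt milliliters into a daily total in liters."""
    return ml_per_prompt * prompts_per_day / 1000  # 1,000 mL per liter

print(f"Gemini median: {daily_liters(0.26):,.0f} liters per day")
print(f"GPT-4o medium: {daily_liters(3.5):,.0f} liters per day")
print(f"GPT-5 medium:  {daily_liters(39):,.0f} liters per day")
```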
Americans use lots of water to keep gardens and lawns looking fresh. James Carbone/Newsday RM via Getty Images

For comparison, Americans use about 34 billion liters per day watering residential lawns and gardens. One liter is about one-quarter of a gallon.

Generative AI does use water, but – at least for now – its daily totals are small compared with other common uses such as lawns, showers and laundry.

But its water demand is not fixed. Google’s disclosure shows what is possible when systems are optimized, with specialized chips, efficient cooling and smart workload management. Recycling water and locating data centers in cooler, wetter regions can help, too.

Transparency matters, as well: When companies release their data, the public, policymakers and researchers can see what is achievable and compare providers fairly.

Leo S. Lo does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
