What It Takes to Talk to AI

Interacting with AI has become part of everyday life. GPT-4.5 helps draft emails. Google’s Veo can turn a sentence into video. Image generators restyle selfies to look like Ghibli or Pixar art. It all feels fast, useful, and light.

But what makes these tools feel so effortless is a vast amount of physical infrastructure – infrastructure that consumes significant energy, water, and raw materials. The environmental cost isn’t theoretical. It’s measurable, ongoing, and growing.

If we want to keep building smarter systems, we need to understand what they’re built on.

Let’s Start With the Chips

AI models rely on specialised processors like GPUs and TPUs. These aren’t general-purpose chips: they’re built to run billions of calculations in parallel, sustained over long training runs.

Manufacturing them is resource-intensive. Semiconductor fabs operate around the clock in controlled environments that use vast amounts of electricity and ultra-purified water. Taiwan’s TSMC, which produces chips for NVIDIA and Apple, reportedly used more than 150,000 tonnes of water per day during peak production periods. During droughts, the company has had to truck in water to keep its fabs running.

Each chip has a footprint that includes mining, energy consumption, and cross-continental shipping. They are central to the AI economy and to its environmental impact.

Then Comes the Training

Once the chips are manufactured, they’re deployed into data centres where training takes place. Training a large model like GPT-3 required approximately 1,287 megawatt-hours of electricity. That’s comparable to the annual usage of more than 120 US homes.
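The household comparison can be sanity-checked with back-of-envelope arithmetic. The 10,500 kWh household figure below is an assumption (averages vary by source and year), not a number from the reporting itself:

```python
# Sanity check of the "more than 120 US homes" comparison above.
# Assumption: an average US household uses roughly 10,500 kWh of
# electricity per year (the exact figure varies by source).
gpt3_training_kwh = 1_287 * 1_000       # 1,287 MWh, converted to kWh
avg_home_kwh_per_year = 10_500          # assumed US household average

homes_equivalent = gpt3_training_kwh / avg_home_kwh_per_year
print(f"~{homes_equivalent:.0f} US homes powered for a year")  # ~123
```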

GPT-4.5, being larger and more complex, likely consumed significantly more, though exact figures haven’t been disclosed. These training runs aren’t rare. Dozens of companies, universities, and research labs around the world run similar operations regularly.

And once a model is trained, it’s not retired. It enters deployment and runs continuously, responding to queries, generating content, and supporting a wide range of applications. All of that activity takes place in power-hungry server rooms.

Water for Every Prompt

Cooling is a major part of AI infrastructure. Data centres generate significant heat, and many rely on evaporative cooling systems. These systems can consume millions of litres of water daily.

According to research cited by The Washington Post, a 100-word response from GPT-4 consumes around 519 millilitres of water and 0.14 kilowatt-hours of electricity.

That’s just one prompt. When multiplied across millions of users, the impact is considerable.
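Scaling the per-response figures gives a sense of the aggregate. The per-prompt numbers below come from the article; the 10-million-responses-per-day volume is a hypothetical illustration, not a measured figure:

```python
# Per-response figures from the article; the daily volume is assumed.
water_per_response_l = 0.519        # 519 ml of water per 100-word response
energy_per_response_kwh = 0.14      # electricity per 100-word response

responses_per_day = 10_000_000      # hypothetical: 10 million responses/day

daily_water_litres = water_per_response_l * responses_per_day
daily_energy_mwh = energy_per_response_kwh * responses_per_day / 1_000

print(f"{daily_water_litres:,.0f} litres of water per day")    # 5,190,000
print(f"{daily_energy_mwh:,.0f} MWh of electricity per day")   # 1,400
```

At that assumed volume, a single day of inference would use roughly as much electricity as the entire GPT-3 training run cited earlier (about 1,287 MWh).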

These aren’t isolated figures. They reflect how much energy and water are needed just to keep servers operational – not to mention the indirect costs of producing, transporting, and maintaining the hardware itself.

Daily Use Adds Up

AI is used for a wide range of tasks. Some of these are clearly valuable – helping with research, accessibility, or automation. But much of the day-to-day use is casual or personal.

In recent weeks, people have used AI tools to:

  • Generate synthetic voice notes for entertainment
  • Turn their selfies into stylised illustrations
  • Ask ChatGPT to write Instagram captions or short poems
  • Create memes, jokes, or fake product descriptions for fun

On their own, these actions seem harmless. But each one triggers inference – the process by which a trained model is actually used to generate a reply, an image, or a result. Unlike training, which is a one-off (if enormous) expense, inference runs continuously, every second of every day, for as long as the model is deployed.

Studies have found that a ChatGPT query can consume roughly five times the electricity of a standard web search. Multiplied across billions of queries a day, that difference places a substantial extra load on the grid.
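The search comparison can be made concrete with the same kind of arithmetic. Only the roughly-five-times ratio comes from the text; the 0.3 Wh per web search and the billion-query volume are both assumptions for illustration:

```python
# Assumption: a conventional web search uses ~0.3 Wh (a commonly
# quoted ballpark). The query volume is purely illustrative.
search_wh = 0.3
chatgpt_query_wh = search_wh * 5        # the ~5x ratio cited above

queries_per_day = 1_000_000_000         # hypothetical: 1 billion/day
extra_mwh = (chatgpt_query_wh - search_wh) * queries_per_day / 1_000_000

print(f"extra energy vs plain search: {extra_mwh:,.0f} MWh/day")  # 1,200
```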

Video Models Multiply the Demand

While text generation is resource-intensive, video generation takes it further. Google’s Veo, which can create high-definition videos from simple text prompts, is part of this shift.

Video models require far more compute power. They need to generate multiple frames per second, maintain temporal consistency, and manage much larger file sizes. The hardware requirements for training and running these systems are significant.

In 2023, major chip producers shipped over 3.85 million GPUs for data centre use – a figure expected to grow. These chips are central to new AI capabilities, but they come with clear environmental costs at every stage of their lifecycle.

But AI Will Help the Climate, Right?

There are AI tools being developed to help with climate modelling, biodiversity tracking, and energy optimisation. These applications have real potential.

However, without transparent reporting on energy and water usage, it’s difficult to assess whether the benefits outweigh the environmental cost. If building the system produces more emissions than it helps avoid, the trade-off needs to be re-evaluated.

Claims of AI being “for good” are easy to make. They’re much harder to verify without clear metrics.

What Needs to Change

AI is not going away. The tools are improving, the infrastructure is expanding, and the demand is growing. But that makes it even more important to question how we build and use these systems.

Some starting points:

  • Public disclosure: Companies should publish model-level energy and water usage.
  • Model efficiency: Smaller, optimised models are often enough. Not everything requires scale.
  • Infrastructure planning: Data centres should be located near renewable energy sources and built with efficient cooling in mind.
  • Better defaults: Shared usage, prompt optimisation, and caching can all reduce unnecessary load.
  • Environmental accounting: We need to measure actual impact, not rely on best-case assumptions.

The Cost Behind the Convenience

AI doesn’t just live in apps or browser tabs. It lives in server racks, water pipes, chip factories, and power grids. Every interaction we have with it is backed by real infrastructure and real consequences.

We often talk about AI as if it’s separate from the physical world. But it isn’t. It’s embedded in it. Every response, every generation, every shortcut we take through a model like ChatGPT or Veo is built on energy that has to come from somewhere, and water that goes somewhere else once it’s used.

The future we’re building with AI isn’t just measured in accuracy, speed, or innovation. It’s measured in emissions, in supply chains, in ecosystems under pressure. The more we scale, the more that pressure grows.

So next time you open an AI tool, ask yourself a different kind of question. Not just “Can it do this?” but “What did it take to make this possible?”

That question might be the most intelligent thing we ask all day.
