Google’s megawatt move for AI: Revamping power and cooling

+/-400 VDC power delivery: AC-to-DC sidecar power rack

At the 2025 OCP EMEA Summit, Google underscored a key evolution in power delivery: the shift from 48 volts direct current (VDC) to a new +/-400 VDC standard. This advancement enables IT rack power to scale from 100 kilowatts to as much as 1 megawatt. Google also announced plans to contribute its fifth-generation coolant distribution unit (CDU), known as Project Deschutes, to OCP, an effort aimed at accelerating industry-wide adoption of liquid cooling.

Transforming power delivery with 1 MW per IT rack

Nearly a decade ago, Google led the push to adopt 48 VDC within IT racks, dramatically improving power distribution efficiency over the legacy 12 VDC systems. That initiative successfully scaled IT rack capacity from 10 kilowatts to 100 kilowatts, supported by broad industry collaboration.

Now, the demands of AI require even more robust power solutions. Machine learning workloads are expected to exceed 500 kW per IT rack before 2030, and the densification of IT racks—with tightly integrated xPUs (e.g., GPUs, TPUs, CPUs)—calls for a high-voltage DC power distribution system. In this architecture, key power components and battery backup systems are relocated outside the rack.

In its blog post, Google says it is introducing a +/-400 VDC power delivery system capable of supporting up to 1 MW per rack. This development is more than a capacity increase: it enables the use of supply chain components originally developed for electric vehicles (EVs), bringing economies of scale, improved manufacturing efficiency, and higher quality.

As part of the Mt Diablo project, Google is collaborating with Meta and Microsoft through OCP to standardize the electrical and mechanical interfaces. A 0.5 draft specification will be made available for industry feedback in May. The first implementation is an AC-to-DC sidecar power rack that disaggregates power components from the IT rack, improving end-to-end efficiency by approximately 3% and dedicating more space within the rack for xPUs. Looking ahead, Google is exploring higher-voltage DC power distribution directly within the data center and to the rack, aiming for even greater efficiency and density.
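
The rationale for higher-voltage distribution is easy to see with a back-of-the-envelope calculation. The sketch below is illustrative only: the 0.1-milliohm path resistance is an assumed placeholder, and treating +/-400 VDC as an 800 V rail-to-rail bus is a simplification, not a figure from the Mt Diablo draft. It compares the bus current and conduction (I²R) loss of a 1 MW rack fed at 48 VDC versus +/-400 VDC.

    # Rough, illustrative comparison of rack bus current and conduction loss
    # at two DC distribution voltages for a 1 MW IT rack. The path resistance
    # is an assumed placeholder, not a value from the Mt Diablo specification.

    RACK_POWER_W = 1_000_000        # 1 MW rack load
    PATH_RESISTANCE_OHM = 0.0001    # assumed 0.1 mOhm distribution path

    def bus_current(power_w, voltage_v):
        """Current drawn from the DC bus: I = P / V."""
        return power_w / voltage_v

    for label, volts in [("48 VDC", 48.0), ("+/-400 VDC (800 V rail-to-rail)", 800.0)]:
        amps = bus_current(RACK_POWER_W, volts)
        loss_w = amps ** 2 * PATH_RESISTANCE_OHM   # conduction loss scales with I^2
        print(f"{label}: {amps:,.0f} A bus current, ~{loss_w / 1000:.1f} kW lost in the assumed path")

Under these assumptions, a 48 V bus would need to carry more than 20,000 A to deliver a megawatt, which is why raising the distribution voltage and disaggregating AC-to-DC conversion into a sidecar rack go hand in hand.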

The liquid cooling imperative

With chip power consumption rising from 100 W to more than 1,000 W for modern accelerators, advanced thermal management has become essential. Densely packed, high-performance chips present serious cooling challenges. Liquid cooling has proven to be the most effective solution thanks to water's superior thermal and hydraulic properties: it can carry approximately 4,000 times more heat per unit volume than air and has roughly 30 times greater thermal conductivity.
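
That volumetric figure can be sanity-checked from textbook material properties. The short sketch below uses round-number densities and specific heats for water and air near room temperature; it is a rough check, not the source of the numbers above.

    # Back-of-the-envelope check of the claim that water carries on the order
    # of 4,000x more heat per unit volume than air, using textbook properties
    # near room temperature (approximate round numbers).

    WATER_DENSITY = 997.0         # kg/m^3
    WATER_SPECIFIC_HEAT = 4182.0  # J/(kg*K)
    AIR_DENSITY = 1.2             # kg/m^3
    AIR_SPECIFIC_HEAT = 1005.0    # J/(kg*K)

    # Volumetric heat capacity: energy absorbed per cubic metre per kelvin of rise.
    water_vol = WATER_DENSITY * WATER_SPECIFIC_HEAT   # ~4.17 MJ/(m^3*K)
    air_vol = AIR_DENSITY * AIR_SPECIFIC_HEAT         # ~1.2 kJ/(m^3*K)

    print(f"Water: {water_vol / 1e6:.2f} MJ/(m^3*K)")
    print(f"Air:   {air_vol / 1e3:.2f} kJ/(m^3*K)")
    print(f"Ratio: ~{water_vol / air_vol:,.0f}x")      # roughly 3,500x, the same order as ~4,000x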

Google has implemented liquid cooling at scale across more than 2000 TPU Pods over the past seven years, achieving remarkable system availability of about 99.999%. The company first introduced liquid cooling with TPU v3 in 2018. Liquid-cooled ML servers take up nearly half the volume of their air-cooled equivalents by replacing large heatsinks with compact cold plates, allowing for a doubling of chip density and a fourfold increase in supercomputer size from TPU v2 to TPU v3.
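
For context, that availability figure translates into only a few minutes of downtime per year. The quick arithmetic below is purely illustrative and says nothing about how Google actually measures TPU Pod availability.

    # Quick arithmetic on what ~99.999% ("five nines") availability means in
    # downtime per year, purely to put the figure in context.

    MINUTES_PER_YEAR = 365.25 * 24 * 60

    for availability in (0.999, 0.9999, 0.99999):
        downtime_min = MINUTES_PER_YEAR * (1 - availability)
        print(f"{availability:.5f} availability: ~{downtime_min:.1f} minutes of downtime per year")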

Google has refined this technology through successive generations, from TPU v3 to TPU v4, TPU v5, and most recently, Ironwood. Its approach uses in-row CDUs with redundant systems and uninterruptible power supplies (UPS) to ensure high availability. These CDUs isolate the rack's liquid loop from the facility loop, forming a controlled, high-performance cooling environment via manifolds, flexible hoses, and cold plates attached directly to high-power chips. Project Deschutes, Google's latest CDU architecture, includes a redundant pump and heat exchanger system, which has underpinned this near-perfect uptime since 2020.
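
To give a sense of scale, the heat a rack-level liquid loop must remove maps directly to a required coolant flow rate. The sketch below is a generic energy-balance estimate: the 10 K coolant temperature rise is an assumption chosen for illustration, not a Project Deschutes operating point.

    # Generic energy-balance estimate of the coolant flow needed to remove a
    # given rack heat load. The 10 K temperature rise across the loop is an
    # assumed example value, not a Project Deschutes parameter.

    WATER_SPECIFIC_HEAT = 4182.0   # J/(kg*K)
    WATER_DENSITY = 997.0          # kg/m^3

    def required_flow_lpm(heat_load_w, delta_t_k):
        """Litres per minute of water needed to carry heat_load_w at a delta_t_k rise.

        Energy balance: Q = m_dot * c_p * dT, so m_dot = Q / (c_p * dT).
        """
        mass_flow_kg_s = heat_load_w / (WATER_SPECIFIC_HEAT * delta_t_k)
        volume_flow_m3_s = mass_flow_kg_s / WATER_DENSITY
        return volume_flow_m3_s * 1000.0 * 60.0   # m^3/s -> L/min

    for rack_kw in (100, 500, 1000):
        print(f"{rack_kw:4d} kW rack: ~{required_flow_lpm(rack_kw * 1000.0, 10.0):.0f} L/min at a 10 K rise")

With these assumed numbers, a 1 MW rack works out to roughly 1,400 L/min through the loop, which is why CDU capacity, redundancy, and serviceability matter more as rack power climbs.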

Later this year, Google will contribute the fifth-generation Project Deschutes CDU to OCP. This contribution will include system designs, specifications, and best practices aimed at accelerating industry-wide adoption of scalable liquid cooling. The insights shared stem from nearly a decade of experience and will encompass:

  • Design for high cooling performance
  • Manufacturing quality
  • Reliability and uptime
  • Deployment velocity
  • Serviceability and operational excellence
  • Supply ecosystem advancements

Next generation of AI

While the industry has made significant progress in power delivery and cooling, the accelerating pace of AI hardware development demands an even faster evolution of data center capabilities. Google sees tremendous promise in the widespread adoption of +/-400 VDC, enabled by the forthcoming Mt Diablo specification. It also encourages industry peers to adopt the Project Deschutes CDU and to draw on the liquid cooling experience Google is contributing through OCP.
