AI workloads are changing the thermal profile of data centres faster than air cooling can adapt. What worked for conventional enterprise computing is breaking down under the heat generated by modern AI accelerators. The result is a growing industry pivot toward direct-to-chip liquid cooling.
The shift is not cosmetic. Large language models and other AI systems rely on power-dense GPUs and custom silicon that push rack power well beyond traditional limits. Average enterprise racks once drew around 10-15 kW. AI-driven deployments are now pushing 60 kW, 100 kW, and in some cases even higher. At those levels, air cooling becomes inefficient, energy-intensive, and increasingly unreliable.
Why AI breaks traditional cooling models
Most data centres today are designed around air cooling, with rack densities averaging roughly 15 kilowatts. That model is breaking down. Industry projections show AI-driven environments reaching between 60 and 120 kilowatts per rack, largely due to power-hungry GPUs and specialised AI processors.
At these densities, air cooling struggles to remove heat fast enough. Excess heat not only limits performance but also increases the risk of hardware failures and shortens equipment lifespan. For operators running LLM workloads, maintaining stable temperatures is no longer optional; it is essential to reliability.
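The arithmetic behind that failure is simple to sketch. Using textbook air properties and an assumed 10 K air temperature rise (illustrative figures, not measurements from any particular facility), the airflow a rack needs scales linearly with its power draw:

```python
# Rough sketch: volumetric airflow needed to remove a rack's heat load.
# Assumed figures: air density ~1.2 kg/m^3, specific heat ~1005 J/(kg*K),
# and a 10 K allowable inlet-to-outlet air temperature rise.

AIR_DENSITY = 1.2   # kg/m^3
AIR_CP = 1005       # J/(kg*K)
DELTA_T = 10        # K, inlet-to-outlet temperature rise

def airflow_m3s(rack_kw: float) -> float:
    """Volumetric airflow (m^3/s) needed to carry away rack_kw of heat."""
    watts = rack_kw * 1000
    mass_flow = watts / (AIR_CP * DELTA_T)   # kg/s of air
    return mass_flow / AIR_DENSITY           # convert to m^3/s

for rack_kw in (15, 60, 100):
    m3s = airflow_m3s(rack_kw)
    cfm = m3s * 2118.88  # cubic feet per minute
    print(f"{rack_kw:>4} kW rack -> {m3s:5.1f} m^3/s (~{cfm:,.0f} CFM)")
```

Roughly 17,500 CFM for a single 100 kW rack is an enormous volume of air to push through one enclosure, which is where the fan energy, noise, and hot-spot problems come from.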
The shift to direct-to-chip liquid cooling
Direct-to-chip liquid cooling is emerging as a practical solution to this thermal challenge. Instead of cooling an entire room or rack with air, these systems deliver liquid directly to the hottest components, such as GPUs and AI accelerators, where heat is generated most intensely.
Liquid has a major physical advantage: water carries roughly 3,500 times more heat per unit volume than air, so the same thermal load can be removed with far smaller flow rates. This makes it particularly suited to dense AI environments where precision cooling matters.
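That ratio falls straight out of the two fluids' volumetric heat capacities. A back-of-envelope check, using assumed textbook properties (real facility coolants such as water/glycol mixes differ somewhat):

```python
# Back-of-envelope: heat carried per unit volume of coolant per kelvin of rise.
# Textbook property assumptions; actual coolant blends vary slightly.

AIR_VOL_HEAT_CAP = 1.2 * 1005      # J/(m^3*K), ~1.2 kJ per m^3 per K
WATER_VOL_HEAT_CAP = 1000 * 4180   # J/(m^3*K), ~4.18 MJ per m^3 per K

ratio = WATER_VOL_HEAT_CAP / AIR_VOL_HEAT_CAP
print(f"Water carries ~{ratio:,.0f}x more heat per unit volume than air")

# Flow needed for a hypothetical 100 kW rack at a 10 K coolant rise:
DELTA_T = 10  # K
water_lpm = 100_000 / (WATER_VOL_HEAT_CAP * DELTA_T) * 1000 * 60
print(f"100 kW rack at a 10 K rise: ~{water_lpm:.0f} L/min of water")
```

Moving on the order of 150 litres of water per minute is routine plumbing compared with the thousands of CFM of air the same rack would otherwise demand.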
Compared with conventional approaches, direct-to-chip liquid cooling offers several clear benefits:
- Higher efficiency: Liquid cooling systems consume less energy to remove the same amount of heat, improving overall data centre efficiency.
- Sustained performance: Processors can operate at peak capacity without thermal throttling, even at very high wattage levels.
- Improved safety: Modern systems use water-based or non-conductive fluids, reducing risk to equipment and personnel.
- Lower environmental impact: By reducing reliance on energy-intensive chillers and air-handling units, liquid cooling can cut emissions, noise, and power consumption.
By targeting heat at its source, these systems keep AI processors within optimal temperature ranges, improving stability and extending hardware life.
Schneider Electric’s approach to AI thermal management
Schneider Electric, through its Motivair portfolio, has been developing liquid cooling technologies tailored for AI and HPC workloads.
Its Coolant Distribution Units (CDUs) are designed to circulate coolant efficiently across data centres, with cooling capacities ranging from 105 kilowatts up to 2.3 megawatts, making them suitable for large-scale AI deployments.
The ChilledDoor® rear-door heat exchangers offer another option, passing rack exhaust air through a liquid-cooled coil to remove heat at the rack itself. These systems can handle loads of up to 75 kilowatts, a good fit for hybrid cooling environments.
For direct processor cooling, Dynamic® cold plates are built to manage extreme thermal outputs, supporting processors with heat loads exceeding 1,500 watts. They are compatible with major chip platforms from AMD, NVIDIA, Intel, and custom silicon designs.
Supporting this setup are in-rack stainless-steel manifolds, which distribute coolant between CDUs and cold plates, ensuring stable and integrated liquid flow across racks.
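To gauge what those component ratings mean in practice, here is an illustrative sizing sketch for a single cold plate; the temperature rises and fluid properties are assumptions for the example, not Motivair specifications:

```python
# Illustrative sizing for one cold plate: coolant flow for a 1,500 W processor.
# Assumed water properties and temperature rises; not vendor specifications.

CP_WATER = 4180   # J/(kg*K)
DENSITY = 1000    # kg/m^3

def coolant_lpm(chip_watts: float, delta_t_k: float) -> float:
    """Litres per minute of water needed to absorb chip_watts at delta_t_k rise."""
    mass_flow = chip_watts / (CP_WATER * delta_t_k)  # kg/s
    return mass_flow / DENSITY * 1000 * 60           # L/min

print(f"1,500 W chip, 10 K rise: {coolant_lpm(1500, 10):.1f} L/min")
print(f"1,500 W chip,  5 K rise: {coolant_lpm(1500, 5):.1f} L/min")
```

A few litres per minute per socket is a modest duty for an in-rack manifold, which is what makes distributing coolant to dozens of cold plates in a single rack tractable.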
At the facility level, Schneider Electric also offers oil-free centrifugal chillers, available in air-cooled, water-cooled, and free-cooling configurations. These systems provide up to 2.5 megawatts of cooling capacity and support operating temperatures of up to 33°C, helping data centres reduce energy use and emissions.
As AI models grow larger and more complex, thermal management will increasingly shape how and where AI infrastructure is deployed. Direct-to-chip liquid cooling is moving from an emerging option to a foundational requirement for next-generation AI and HPC systems.
