‘Spray cooling’ is gaining traction as AI workloads push power and thermal limits far beyond what traditional facilities were designed to handle. Rack densities that once averaged 5 to 10 kilowatts are now exceeding 100 kilowatts, with megawatt-scale designs already being planned. As a result, cooling systems are being re-evaluated as a foundational constraint on AI growth, not a secondary consideration.
At the same time, data center architectures are evolving toward higher-voltage designs, including 800-volt systems, to support denser compute environments. However, while power delivery continues to advance, air cooling has largely been outpaced by the thermal demands of modern AI chips. Many liquid-cooling alternatives exist, but they often require infrastructure changes that legacy facilities struggle to absorb. Against this backdrop, spray cooling for AI data centers is being positioned as a more flexible middle ground.
Rack-Level Spray Cooling Targets AI Density Challenges
Airsys Cooling Technologies recently introduced its PowerOne platform, which includes a rack-level spray-cooling system known as LiquidRack. The design targets AI workloads that generate sustained, high heat loads and require consistent performance. Instead of relying on cold plates or submerging servers in fluid, the system sprays a dielectric coolant directly onto the processors, capturing heat at the source.
The warmed coolant is then routed through plate heat exchangers, where thermal energy is transferred into a water loop. From there, heat is expelled using a dry cooler. Notably, compressors and centralized coolant distribution units are not required, which reduces system complexity. This architecture allows rack densities of up to 170 kilowatts when standard chilled water temperatures are used.
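As a rough illustration of the heat balance behind a figure like 170 kilowatts per rack, the coolant flow required can be estimated from Q = ṁ·c·ΔT. The fluid properties and temperature rise below are illustrative assumptions, not published specifications for LiquidRack or any particular dielectric coolant:

```python
# Rough heat-balance sketch for a spray-cooled rack.
# All fluid properties and the temperature rise are assumed,
# representative values, not vendor specifications.

RACK_HEAT_LOAD_W = 170_000        # 170 kW rack, per the figure above
SPECIFIC_HEAT_J_PER_KG_K = 1100   # assumed for a dielectric coolant
DENSITY_KG_PER_M3 = 1600          # assumed dielectric fluid density
DELTA_T_K = 15                    # assumed coolant temperature rise across the rack

# Q = m_dot * c_p * dT  ->  m_dot = Q / (c_p * dT)
mass_flow_kg_s = RACK_HEAT_LOAD_W / (SPECIFIC_HEAT_J_PER_KG_K * DELTA_T_K)
volume_flow_l_min = mass_flow_kg_s / DENSITY_KG_PER_M3 * 1000 * 60

print(f"Mass flow: {mass_flow_kg_s:.1f} kg/s")          # -> about 10.3 kg/s
print(f"Volumetric flow: {volume_flow_l_min:.0f} L/min") # -> about 386 L/min
```

Even under these assumed properties, the numbers show why each cassette needs its own pump: moving hundreds of liters per minute through a single rack-level loop would be far less practical than distributing the flow across ten independent units.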
In comments previously shared with industry publications, Tony Fischels, vice president of PowerOne at Airsys, described the system as immersion-ready while remaining fully enclosed at the rack level. Each rack contains 10 independent cassettes, each equipped with its own pump and heat exchanger. As a result, coolant is continuously collected and recirculated without leaving the rack boundary.
Why Spray Cooling for AI Data Centers Appeals to Legacy Facilities
Beyond thermal performance, spray cooling for AI data centers is being adopted for its infrastructure efficiency. Because compressors are eliminated, significantly less mechanical equipment is required on-site. This reduction is especially relevant for older data centers where floor space and ceiling height are limited.
In addition, power infrastructure can be used more efficiently. In conventional designs, compressors are sized for peak summer temperatures but remain underutilized for much of the year. Even so, generators, switchgear, and distribution systems must still be built to support those peak loads. Removing compressors from the cooling loop therefore unlocks that stranded power capacity.
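The stranded-capacity argument can be sketched with hypothetical numbers. Every figure below is an assumption chosen for illustration, not a measurement from any real facility:

```python
# Illustrative sketch of stranded power capacity.
# All figures are hypothetical assumptions, not facility data.

facility_capacity_kw = 10_000      # total electrical provisioning (assumed)
compressor_peak_kw = 2_000         # compressor draw at peak summer load (assumed)
compressor_avg_utilization = 0.40  # fraction of peak drawn over the year (assumed)

# Conventional design: generators and switchgear must cover the
# compressor peak, even though average draw is far lower.
avg_compressor_kw = compressor_peak_kw * compressor_avg_utilization
stranded_kw = compressor_peak_kw - avg_compressor_kw

# Compressor-free design: the full peak allocation can be redirected
# toward compute, or the electrical footprint can be shrunk.
reclaimable_kw = compressor_peak_kw

print(f"Average compressor draw: {avg_compressor_kw:.0f} kW")
print(f"Capacity stranded by peak sizing: {stranded_kw:.0f} kW")
print(f"Power reclaimable for compute: {reclaimable_kw:.0f} kW")
```

Under these assumptions, a fifth of the facility's electrical provisioning exists only to cover a compressor peak that is reached a few weeks a year, which is the capacity the compressor-free architecture aims to reclaim.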
As has been noted in public remarks by Airsys executives, that reclaimed capacity can be redirected toward AI compute or used to shrink the overall electrical footprint of the facility. Either outcome aligns with a central objective of AI infrastructure design: increasing usable compute without proportionally increasing energy overhead.
Spray cooling occupies a distinct position between air cooling and full immersion systems. While immersion platforms can exceed 200 kilowatts per rack, they often demand greater structural changes. For many AI operators, especially those retrofitting existing sites, spray cooling for AI data centers offers a practical balance between performance, cost, and deployability.
As AI continues to drive data center evolution, cooling strategies are no longer being treated as secondary engineering choices. Instead, they are being recognized as strategic enablers of scalable, energy-efficient AI infrastructure.
