The data center industry is scrambling to accommodate a wave of demand unlike anything we’ve seen before.
According to research from Goldman Sachs, current global data center capacity demand sits around 62GW. Cloud workloads account for 58%, traditional enterprise workloads 29%, and AI workloads 13%. By next year, however, AI is on track to represent around 28% of total demand, while cloud drops to 50% and traditional workloads to 21%. The five largest US hyperscale technology companies are expected to spend a combined $736 billion (slightly higher than the GDP of Belgium, to put that into perspective) across 2025 and 2026.
As data center builders race to support demand for AI workloads, we’re seeing a sea change in how these organizations approach site selection and data center design. In the AI age, securing access to power, and then managing that power within a facility, is posing new questions. Questions without simple answers. Questions that are sending ripples through the data center supply chain.
Site selecting around stumbling blocks in the AI race
For decades, site selection in Europe has revolved around land availability, fibre connectivity and proximity to customers. When you’re hosting responsive cloud services and streaming platforms, it pays to be as close as possible to the end user. Today, power has overtaken physical footprint as the decisive factor. Latency? Doesn’t matter as much in the age of AI.
These changing priorities are due primarily to the fact that, across much of Europe, grid investment has lagged behind rising electrification demands for decades. AI deployments mean higher density racks (by an order of magnitude compared with a traditional colocation site or carrier hotel), and developers are facing the reality that available megawatts, not square meters, are what make a site viable. Some operators are being forced to leave entire floors of new facilities empty, not for lack of demand (far, far from it), but because the grid connections needed to meet that demand are insufficient.
In order to secure that access to power (as well as other bonuses like lower ambient temperatures that allow for free cooling), data center operators are starting to cast their eyes farther afield.
Luckily, AI workloads are less latency-sensitive than traditional cloud applications like streaming or payments infrastructure. That fact has opened up the possibility of building more facilities outside the saturated FLAP-D markets.
That being said, relocating outside established regions introduces its own pain points: under-industrialized areas with less mature construction ecosystems, transport bottlenecks, and shortages of skilled labour. Solving one problem creates several more.
AI power loads pose a “spiky” problem
Clearly, the next generation of data centers built to support AI workloads will not only need more power coming in, but will also need to be designed to handle that power very differently once it’s inside the facility.
At the core of the issue is power management and the difference between managing an AI workload versus a more traditional cloud or colocation one.
AI server clusters are built around high-density GPU arrays. When used for AI training or inference, these racks behave very differently from conventional workloads. Loads ramp from idle to full capacity in a few seconds, then drop off again just as quickly. And because deep learning systems are fundamentally black boxes, predicting these jumps is challenging. These “spiky” load profiles introduce issues throughout power systems, and concerns are emerging over premature ageing of UPS batteries, overstressing of the UPS systems themselves, transformer fatigue, and potential impacts on low-voltage switchgear protection modules.
Mitigation strategies are already emerging. Some projects are evaluating supercapacitors and battery energy storage systems positioned upstream of UPS installations to buffer volatility and smooth load transitions before they propagate through the network.
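The effect of that kind of upstream buffering can be illustrated with a toy model. The sketch below uses entirely hypothetical numbers (a square-wave rack load and an arbitrary grid-side ramp limit), not figures from any real facility or product; it simply shows how a storage element that absorbs fast swings leaves the grid connection seeing a much gentler profile.

```python
# Toy model (illustrative only): a spiky GPU-cluster load profile and an
# upstream buffer (supercapacitor/BESS) that limits grid-side ramp rate.
# All values are hypothetical assumptions, not real facility data.

def spiky_load(steps, idle_kw=200.0, peak_kw=2000.0, period=10):
    """Square-wave load: jumps from idle to peak and back every `period` steps,
    mimicking an AI cluster ramping between idle and full training load."""
    return [peak_kw if (t // period) % 2 else idle_kw for t in range(steps)]

def buffered_draw(load, max_ramp_kw_per_step=100.0):
    """Grid-side draw when a buffer absorbs fast swings.

    The buffer discharges when the load spikes and recharges when it drops,
    so the grid sees a ramp-rate-limited version of the raw profile.
    """
    draw = [load[0]]
    for target in load[1:]:
        prev = draw[-1]
        # Clamp the change in grid draw to the allowed ramp per time step.
        step = max(-max_ramp_kw_per_step, min(max_ramp_kw_per_step, target - prev))
        draw.append(prev + step)
    return draw

load = spiky_load(100)
grid = buffered_draw(load)

raw_ramp = max(abs(b - a) for a, b in zip(load, load[1:]))
grid_ramp = max(abs(b - a) for a, b in zip(grid, grid[1:]))
print(f"raw worst-case ramp:      {raw_ramp:.0f} kW/step")
print(f"buffered worst-case ramp: {grid_ramp:.0f} kW/step")
```

In this sketch the raw load swings by 1,800 kW in a single step, while the buffered grid draw never changes by more than 100 kW per step. The difference in each step is the energy the storage element must supply or absorb, which is what sizes the supercapacitor or battery bank in practice.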
Building for the unknown
There’s a problem, however: the first generation of “AI native” data centers is still being built. The facilities of tomorrow are still holes in the ground awaiting concrete, or metal skeletons swarming with (very well-paid) construction crews. Facilities coming online today were designed over 12 months ago, when the challenges AI workloads pose were less well understood. Very few sites today are actually running 100% AI workloads, and it’s difficult to predict exactly which problems will arise when the rubber inevitably meets the road.
For now, many of the risks are still theoretical, and the next wave of facilities will test assumptions in real-world conditions. If the AI boom continues, and these power management issues aren’t resolved, it could present a major hurdle for the sector. Or it might be more akin to Y2K and never materialize into the doomsday scenario many engineers spent the late 90s dreading.
It’s worth noting that, while Y2K didn’t result in a digital apocalypse, the world was only spared disaster by skilled engineers working very hard behind the scenes to fix the issue before the clocks ticked over into 2000. With the looming power problems of the AI boom, the data center sector could be facing its own Y2K. Whether the rest of the world notices will come down to the steps taken today.
Industrializing the build process
If meeting the demands of the AI buildout means building bigger, more complex facilities in remote locations without an established data center industry, then the sector can’t expect the supply chain, procurement, construction, and design techniques that worked on traditional projects to succeed unchanged. Modular offsite construction matured during the COVID-19 pandemic, and may now offer a solution to the demands of the AI boom.
What began as a response to labour shortages and supply chain disruption has matured into modular, factory-based manufacturing and assembly solutions that enable tighter quality control, reduced on-site labour requirements, and earlier-stage testing under controlled conditions. Modules can undergo Levels 1 and 2 testing before shipment. This approach shortens timelines and mitigates labour constraints in remote or less mature markets.
As developers explore sites further north in Europe (attracted by cooler climates or proximity to renewables) modularization becomes even more critical. Regions with limited construction ecosystems benefit from reduced on-site complexity and smaller specialist crews.
Operators are increasingly ring-fencing manufacturing capacity years in advance, securing production slots for 2027, 2028 and beyond. The rising popularity of OFCI (owner-furnished, contractor-installed) procurement reflects a broader desire for control. Rather than relying on contractors to source key equipment, owners are purchasing directly from manufacturers, establishing long-term supplier relationships and safeguarding continuity of supply. Installation is then executed by contractors, but ownership of the procurement pipeline remains with the client.
Speed and agility will separate the winners from the losers
A data center conceived today using conventional timelines may be technologically outdated by the time it becomes operational.
Industrializing the build process by leveraging techniques like modularization, prefabrication and parallelized workflows is becoming essential to delivering this new generation of AI projects. From concept to client move-in day, timelines are getting shorter. Speed is getting more important.
Agility is also vital. With the sector entering a period of intense iteration, we can expect to see designs evolve rapidly over relatively short intervals. Version 2, 3 and 4 architectures will follow in quick succession. What is state-of-the-art today may look conservative within 18 months. The only reliable assumption is that compute density will increase and power demands will intensify. Beyond that, it’s anybody’s guess.
In this environment, larger, less flexible organizations may struggle to respond at the required pace. Smaller, boutique manufacturers, able to pivot faster and later in a project’s timeline, will be the ones delivering a critical competitive advantage.
A generational infrastructure challenge
The AI boom is often framed as a software revolution. Or a productivity revolution. Or as the unstoppable rise of the machines. Take your pick.
The reality is that no revolution of any kind will be taking place without the necessary infrastructural foundation. The challenge is not only getting significantly more power into sites, but also distributing that power within facilities in a way that accounts for new kinds of workloads without burning out switchgear, transformers, or a multi-million dollar rack of GPUs.
The next generation of data center projects will be defined by their ability to absorb, manage and optimize unprecedented power densities. They will be defined by where they are built, how they are designed, and how quickly those designs are delivered.
Whether the industry experiences disruption or navigates these hurdles will depend on the decisions being made now, in design offices and on factory floors. We’re well and truly in the Age of AI. The winners of this race won’t be the ones who simply build bigger. They’ll be the ones who build smarter, stay agile, and work hand in glove with the right partners.
Media Contact – Emma Rigby : [email protected]
