As AI accelerates infrastructure demand, power delivery is becoming the defining constraint of scale.
The conversation around data infrastructure is no longer centered on compute alone. As workloads grow denser and deployment cycles shorten, the ability to deliver stable, resilient, and high-capacity power within the data center has emerged as one of the most critical challenges in the industry. Infrastructure today must not only support performance; it must sustain it under continuous, high-intensity demand.
What was once treated as a backend engineering function is now a strategic layer of infrastructure design. Power systems are no longer passive enablers; they actively determine how far and how fast data centers can scale. Reliability, integration, and speed of deployment have become non-negotiable, especially as operators race to support AI-driven workloads.
As this leadership series progresses, we turn to a leader operating at the heart of this transformation, where electrical infrastructure meets execution at scale. His work reflects a broader shift in the industry: from fragmented system delivery to fully integrated, deployment-ready power architecture.
As our “Top 10 Impactful Players in Data Infrastructure” unfolds, a clear pattern is emerging: infrastructure leadership today is defined not just by vision, but by the ability to translate complexity into systems that perform reliably in real-world conditions. This feature explores Michael Beagan’s perspective on delivering mission-critical power infrastructure, the challenges of supporting next-generation data centers, and the role of execution in defining infrastructure success.
Executive Profile
Michael Beagan
Managing Director, TES Group
As Compute Forecast continues its Top 10 Impactful Players in Data Infrastructure series, we spotlight Michael Beagan, a leader whose work underpins one of the most essential yet often underappreciated layers of modern data centers: power infrastructure.
As Managing Director at TES Power, Michael Beagan stands at the forefront of delivering the critical power systems that define how modern data centers are built, scaled, and sustained. His role centers on ensuring that power systems within data centers are not only designed to specification, but integrated, tested, and deployed in a way that guarantees performance under real operational conditions.
With a career rooted in mission-critical environments, Michael brings a deep understanding of how electrical systems must function as part of a larger, tightly coordinated infrastructure ecosystem. His work spans the delivery of complex power distribution architectures, modular electrical solutions, and high-reliability systems that enable data centers to operate without interruption, even under increasing load and operational pressure.
What distinguishes Michael’s contribution is his focus on execution as a strategic discipline. In an industry where timelines are compressed and failure is not an option, his approach emphasizes precision, integration, and accountability across every stage of delivery. This includes aligning engineering, manufacturing, and on-site implementation into a cohesive process that reduces risk and accelerates deployment.
Beyond delivery, his influence extends into how the industry views infrastructure readiness. By advancing integrated and modular approaches to power deployment, he contributes to a broader evolution in how data centers are designed and executed, moving toward systems that are faster to deploy, easier to scale, and more reliable in operation.
This feature explores Michael Beagan’s perspective on the realities of delivering power infrastructure at scale, the challenges of supporting AI-ready environments, and the role of system-level thinking in shaping the next phase of data infrastructure.
Straight out of the Bag with Michael Beagan
This conversation captures Michael Beagan’s unfiltered perspective on the moment infrastructure moves from theory to reality, and what it takes to get it right.
Executive Q&A – Michael Beagan, Managing Director, TES Group
Category: Critical Power Infrastructure & Data Center Enablement
Q1. TES operates at the core of data center electrical systems. How has your work directly influenced the way power is designed, distributed, and managed within modern hyperscale facilities?
Meeting the demands of modern data center growth means learning to design and build for the AI age. The scale of AI demand and the speed at which it needs to be delivered is changing where, how, and how fast digital infrastructure is built. This is a good moment to talk about modular, pre-engineered assemblies and what they offer. At their core, they deliver three holy grails: speed, quality, and reliability, along with the ability to scale quickly when demand increases.
The AI buildout is driving the need for larger and more complex facilities, sometimes in remote locations without an established data center ecosystem (AI has a higher tolerance for latency, so operators are free to look beyond Tier 1 markets for cheap land, renewable power, and other benefits). In that environment, the industry can’t keep relying on traditional supply chains, procurement models, and construction methods. Approaches that worked well for digital infrastructure projects geared towards cloud and colocation applications are less suited to the AI age of scale, speed, and insatiable demand for power.
Modular offsite construction, which really got stress-tested as an approach during the COVID-19 pandemic, is a strong alternative. What started as a response to labour shortages and supply chain disruption has matured into a factory-based approach to manufacturing and assembly. This allows for tighter quality control, reduced reliance on on-site labour, and the ability to test systems earlier in controlled environments. Modules can undergo Level 1 and Level 2 testing before they are shipped, which helps shorten project timelines and reduce risks.
This approach becomes even more valuable as developers look to places like Northern Europe, where cooler climates and access to renewable energy are attractive, but local construction ecosystems may be limited. Modularization reduces on-site complexity and the need for large specialist teams, making delivery more practical in these areas. More broadly, this shift highlights a fundamental change in how data centers are built. Traditionally, construction involved multiple trades working simultaneously on-site, often leading to coordination challenges. Today, the process is increasingly moving toward a manufacturing model, where components are produced in controlled factory environments designed for consistency and precision. This means higher quality outcomes, improved resilience, and a more scalable delivery model. It also makes it easier to deliver projects at the pace demanded by the AI boom compared with traditional design and construction methods.
Q2. Power infrastructure increasingly needs to align with advanced cooling systems for high-density environments. How are you addressing this integration within modern data center design?
Most cooling investment (and discussion) is focused on IT equipment right now, as AI pushes rack densities into the stratosphere. The knock-on effect of this is that electrical systems like switchgear also operate at higher ambient temperatures than before. That makes thermal performance a key consideration for a power company like TES, even if we don’t cool the servers ourselves. While we are not directly involved in designing cooling systems, we obviously play a role in ensuring those systems are reliably powered and properly integrated. Switchgear receives less attention in cooling strategies, as the priority and budget are focused on IT equipment, so being able to design our modules and the equipment in them to work reliably at higher temperatures is going to have a net positive effect on the efficiency of the whole facility.
AI is making that into an interesting design challenge, especially when you look at the ways AI workloads differ from a more traditional IT footprint. AI-driven IT loads tend to fluctuate, spiking up and down rather than following a steady pattern. During peak demand, switchgear can heat up dramatically as the servers go from idle to full load and back again in a matter of seconds. It’s not as simple as following the IT load spikes, though: the temperature response is not immediate or linear. When current spikes, temperature rises more gradually, and by the time it begins to climb, the IT load may already be decreasing. This creates subtle but important thermal fluctuations at the system level.
These variations can affect components like electrical connections and materials, where repeated heating and cooling lead to expansion and contraction, particularly in copper. At the same time, modern mitigation technologies like supercapacitors help smooth out load spikes, allowing power demand to follow a more consistent and controlled profile. That consistency and control is vital because, as AI workloads put new kinds of stress on increasingly complex (and expensive) IT equipment, protecting against power and temperature spikes is going to be a critical part of component longevity going forward.
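The lag Beagan describes, where current spikes in seconds but temperature climbs and falls far more slowly, can be illustrated with a toy first-order thermal model. The time constant, ambient temperature, and heating coefficient below are illustrative assumptions, not measurements from any real switchgear.

```python
# Toy sketch: first-order thermal response of switchgear to a spiky AI load.
# All parameters (tau_s, ambient_c, k_c_per_kw) are illustrative assumptions.

def simulate_thermal_lag(load_kw, tau_s=300.0, dt_s=1.0,
                         ambient_c=35.0, k_c_per_kw=0.02):
    """Return a temperature trace for a given load profile.

    Models temperature as a first-order lag toward a steady state of
    ambient + k * load, approached with time constant tau. This captures
    the key point: current spikes in seconds, temperature responds slowly.
    """
    temp = ambient_c
    trace = []
    for p in load_kw:
        target = ambient_c + k_c_per_kw * p
        temp += (dt_s / tau_s) * (target - temp)  # move a fraction toward target
        trace.append(temp)
    return trace

# One-second steps: 60 s idle at 100 kW, a 60 s burst to 1000 kW, then idle.
load = [100.0] * 60 + [1000.0] * 60 + [100.0] * 480
temps = simulate_thermal_lag(load)

print(f"steady-state temp at full load would be: "
      f"{35.0 + 0.02 * 1000.0:.1f} C")
print(f"actual peak temp during the burst:       {max(temps):.1f} C")
print(f"peak occurs at second:                   {temps.index(max(temps))}")
```

With a five-minute time constant, the 60-second burst ends long before the temperature gets anywhere near its full-load steady state, which is exactly the subtle, non-linear thermal behaviour described above.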
Q3. What is one widely accepted approach to data center power infrastructure that you believe needs to be fundamentally rethought as AI workloads scale?
One widely accepted approach that needs to be reconsidered as AI workloads scale is the traditional “N+1 everywhere” model for resilience in data center power infrastructure. In the past, it was standard practice to build redundancy across nearly every system to ensure maximum uptime. That mindset made sense when data centers were smaller and less complex. At the sheer scale of modern AI infrastructure, applying blanket redundancy to every component is becoming physically impractical and prohibitively expensive.
Something has to change, but that doesn’t mean throwing out three decades’ focus on reliable digital infrastructure services. Going forward, the more effective approach is going to be finding the critical path where resilience really matters and protecting that. Not every system requires the same level of redundancy. UPS systems, for example, are essential for maintaining critical operations and can cost millions of pounds per power stream. It’s about putting your money where it adds the most resilience to the critical power path; extending that same level of redundancy to all supporting or non-critical systems quickly stops making sense.
There’s going to be a shift toward more targeted resilience. The goal is still to protect against failure, but with a more strategic allocation of resources. Critical systems should remain highly redundant, while less essential components can tolerate lower levels of protection if their failure doesn’t seriously (or immediately) impact the overall operation of the facility. That said, this is an evolving trend and a big mindset shift for an industry that’s been building N+1 everywhere for years. Figuring out where to draw the line between critical and non-critical systems will likely involve some degree of trial and error. It’s just a matter of making sure the lessons aren’t too painful when we learn them.
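The trade-off behind targeted resilience can be sketched with back-of-the-envelope numbers. The subsystem names, unit costs, and availability figures below are hypothetical, chosen only to show how restricting N+1 to the critical path can preserve critical-path availability at lower capex.

```python
# Hypothetical sketch of "targeted resilience" vs blanket N+1.
# All costs and availability figures are illustrative, not real project data.

def parallel_availability(units, unit_avail):
    """P(at least one of `units` independent units is up)."""
    return 1 - (1 - unit_avail) ** units

subsystems = {
    # name: (unit cost in pounds, unit availability, on the critical path?)
    "UPS stream":        (2_000_000, 0.999, True),
    "main switchgear":   (1_000_000, 0.999, True),
    "support cooling":   (300_000,   0.995, False),
    "building services": (200_000,   0.995, False),
}

def plan_cost(plan):
    """Total capex for a redundancy plan: name -> unit count."""
    return sum(subsystems[name][0] * n for name, n in plan.items())

def critical_path_availability(plan):
    """Availability of the systems whose failure halts the facility."""
    avail = 1.0
    for name, (_, a, critical) in subsystems.items():
        if critical:
            avail *= parallel_availability(plan[name], a)
    return avail

blanket  = {name: 2 for name in subsystems}  # N+1 everywhere
targeted = {name: (2 if critical else 1)     # N+1 on the critical path only
            for name, (_, _, critical) in subsystems.items()}

print(f"blanket N+1 capex:  {plan_cost(blanket):>12,}")
print(f"targeted capex:     {plan_cost(targeted):>12,}")
print(f"critical-path availability (blanket):  "
      f"{critical_path_availability(blanket):.6f}")
print(f"critical-path availability (targeted): "
      f"{critical_path_availability(targeted):.6f}")
```

Under these made-up numbers the two plans protect the critical path identically, while the targeted plan avoids duplicating the systems whose failure the facility can tolerate.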
Q4. What is one overlooked risk in current data center infrastructure design that could create serious problems as AI workloads scale?
One overlooked risk is how quickly AI hardware is evolving compared to how slowly data centers are designed and built. From the moment you start building a data center to the day the IT equipment is switched on, the process can take several years. But AI chipsets and infrastructure needs are evolving so quickly that the original design may no longer match what the technology requires by the time the facility is ready. That means data centers being built today could end up optimised for hardware that’s already outdated by the time they go live.
This is very different from the past. Traditional workloads were more predictable and stable for long periods. Infrastructure could be designed with relative confidence that it would remain relevant for years. With AI, even a five-year horizon feels uncertain. Compute density, power requirements, and cooling needs are all changing really, really quickly. The industry is still learning what fully AI-native operations will demand at scale. Facilities coming online today were designed before the full impact of AI workloads was understood. Very few sites are running entirely AI-driven environments, so some risks are still theoretical. The next wave of deployments will reveal where assumptions fall short, particularly around power management and infrastructure resilience.
The key issue underneath all of this is flexibility. If adaptability is not built into the design from day one, operators may struggle to respond to changing requirements without costly redesigns or delays. That is why modular approaches are becoming more important. For example, modular electrical systems and switchgear allow components to be reconfigured, upgraded, or expanded without starting from scratch. Practically, going forward we’ll need to treat infrastructure less like a fixed asset and more like a system with a tendency to evolve. Modular designs that allow for later-stage changes, scalable capacity, and rapid reconfiguration will be far better positioned to keep pace with the rate of change we’re seeing in the space.
Q5. You have been involved in delivering power infrastructure for large-scale data center campuses. What lessons from these deployments are shaping how future AI-ready facilities are being designed?
Standardization is the key – the be-all and end-all – of doing anything at scale. That’s especially true when it comes to delivering digital infrastructure, and much harder to achieve than many expect. Most operators would probably say they have a standard design, but when you look a little closer, each site often has its own variations. That makes it difficult to scale efficiently. Real standardization means creating a design that can be repeated consistently across multiple locations and even manufactured by different suppliers without major changes. This becomes especially important with AI, where speed to deployment is critical. Operators want to bring capacity online quickly, sometimes across several sites at once. A standardized, modular approach allows equipment to be built and allocated where it’s needed most. If priorities shift or a project timeline changes, that same equipment can be redirected to another site without starting over.
Modular manufacturing is central to standardizing across these large infrastructure projects. By breaking large, complex projects into smaller, repeatable building blocks, it becomes much easier to scale and adapt. If a change is required, it can be made to one module and then applied across the rest of the system. This is far more efficient than reworking one big, monolithic design.
As I said, the industry is still figuring out what “standard” looks like for AI-ready facilities. Requirements are evolving quickly, and there is no fixed blueprint yet. Lots of people in this industry feel like we’re laying the tracks while the train is moving. Modular construction helps bridge that uncertainty by giving a better path towards consistency while still creating room for the unexpected. In practice, the combination of standardization and modularity gives operators a good mix of speed and flexibility. It lets them scale across multiple sites, respond to shifting demands, and refine designs over time without disrupting entire projects.
Q6. Looking ahead, how do you see power infrastructure evolving within AI-driven data centers, and what role has your work played in shaping that direction?
It’s clear to us that modular is becoming the default approach. In the past, modular was useful for niche or remote deployments. Now, the demands of the AI boom mean those practices are being applied to large-scale facilities and even entire campuses. It’s an understandable response to the need for speed, standardization, and flexibility while scaling. AI requirements are going to continue to evolve. So are the ways we design and build AI data centers. Over time, building a data center is likely to feel less like a traditional construction project and more like an assembled system, where major components are manufactured, delivered, and put together on site.
Another important development is the growing role of energy storage and power stabilization technologies. Like I said, AI workloads can create sharp fluctuations in demand, which puts pressure on electrical infrastructure. To manage this, power system design is heading towards including multiple layers of support, using battery energy storage systems and supercapacitors as buffers, helping to smooth out spikes and keep power stable across the whole network.
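A simple simulation illustrates the buffering idea: storage serves the load above a grid setpoint and recharges with spare headroom, so the upstream network sees a flatter draw. The load profile, storage capacity, and charge/discharge policy below are illustrative assumptions, not a model of any specific battery or supercapacitor product.

```python
# Hypothetical sketch: a storage buffer smoothing a spiky AI load so the
# upstream grid sees a capped, flatter draw. All sizes are illustrative.

def smooth_with_buffer(load_kw, grid_limit_kw, capacity_kwh, dt_h=1/3600):
    """Serve spikes above grid_limit_kw from storage; recharge below it.

    Returns (grid_draw, state_of_charge) traces. The grid supplies at most
    grid_limit_kw; the buffer covers the excess while it has energy, and
    recharges with whatever grid headroom is left during quieter periods.
    """
    soc = capacity_kwh            # start with a full buffer
    grid, socs = [], []
    for p in load_kw:
        if p > grid_limit_kw:
            # Discharge to cover the spike, bounded by stored energy.
            discharge = min(p - grid_limit_kw, soc / dt_h)
            soc -= discharge * dt_h
            grid.append(p - discharge)
        else:
            # Recharge with spare headroom, bounded by remaining capacity.
            charge = min(grid_limit_kw - p, (capacity_kwh - soc) / dt_h)
            soc += charge * dt_h
            grid.append(p + charge)
        socs.append(soc)
    return grid, socs

# One-second steps: idle at 200 kW with repeated 30-second bursts to 1000 kW.
load = ([200.0] * 60 + [1000.0] * 30) * 4
grid, socs = smooth_with_buffer(load, grid_limit_kw=500.0, capacity_kwh=10.0)

print(f"peak raw load:  {max(load):.0f} kW")
print(f"peak grid draw: {max(grid):.0f} kW")
```

Even this crude policy halves the peak the network has to absorb, keeping the draw the wider system sees flat while the servers swing from idle to full load.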
This means power systems will need to become more dynamic. Instead of supporting steady, predictable loads, they will be designed to respond in real time to changing conditions. That calls for smarter control systems and more integrated architectures that can manage variability without compromising performance. Our work has helped push this direction by focusing on modular, reconfigurable systems that can evolve alongside changing requirements. By designing infrastructure in smaller, adaptable units, it becomes easier to upgrade, expand, or modify without disrupting the entire system. That approach supports both the scale and the uncertainty that comes with AI.
The future points toward data centers that are manufactured, not just built, with power infrastructure that is layered, flexible, and ready to handle the unique demands of AI at scale.
Powering What Comes Next
Michael Beagan’s inclusion in the Top 10 Impactful Players in Data Infrastructure reflects a leadership approach grounded in execution, precision, and system-level accountability. At a time when AI is redefining the limits of infrastructure, his work underscores a critical reality: performance is only as strong as the power systems that sustain it.
As data centers evolve into high-density, always-on environments, the tolerance for inefficiency or failure continues to shrink. Power infrastructure is no longer a background layer; it is a determining factor in whether facilities can operate reliably at scale. The ability to deliver integrated, resilient systems is quickly becoming one of the industry’s most important competitive advantages.
Michael Beagan stands firmly in that category, enabling not just the expansion of infrastructure, but the confidence to operate it at scale.
