Speed has become the defining currency of the AI era. But in the race to deploy faster, scale bigger, and power increasingly complex workloads, one question is quietly becoming critical: does the infrastructure actually perform as intended when it matters most?
Building data centers is no longer the challenge. Proving they work under real conditions, at full load, without failure, is where the stakes now lie.
As digital infrastructure grows more complex, operational readiness is shifting from a final checkpoint to a foundational discipline. It is no longer enough to deliver capacity. Infrastructure must be validated, stress-tested, and aligned as a complete system before it ever goes live.
In Part 3 of this series, we turn to a leader focused on exactly this inflection point where design meets reality, and where execution determines whether infrastructure delivers on its promise. This feature explores Louis Charlton’s perspective on the evolving role of commissioning, the rising cost of getting it wrong, and why readiness is fast becoming the industry’s most underappreciated competitive advantage.
As the Top 10 unfolds, one insight continues to sharpen: in the age of AI-scale infrastructure, success is not defined by what is built, but by what reliably performs.
Executive Profile
Louis Charlton
CEO, Global Commissioning
Commissioning, Infrastructure Readiness & Operational Integrity
As we continue Compute Forecast’s Top 10 Impactful Players in Data Infrastructure series, we spotlight Louis Charlton, a leader operating at one of the most critical yet often underestimated layers of digital infrastructure: commissioning and operational readiness.
With a career deeply rooted in mission-critical environments, Louis has built a reputation for redefining how data centres transition from construction to fully validated, high-performance systems. His work centers on ensuring that infrastructure does not just get delivered, but performs exactly as intended under real-world conditions, an imperative that has become increasingly vital in the AI era.
As CEO of Global Commissioning, Louis leads a practice focused on integrating commissioning into the earliest stages of infrastructure development. His approach challenges the industry’s traditional view of commissioning as a final checkpoint, instead positioning it as a continuous, design-led discipline that underpins reliability, efficiency, and long-term operational confidence.
What sets Louis apart is his systems-level thinking. He views data centres not as isolated components, but as tightly coupled environments where electrical, mechanical, and control systems must operate as a unified whole.
As AI workloads push density, complexity, and performance expectations to new extremes, Louis’s perspective highlights a fundamental shift: operational readiness is no longer a downstream activity. It is a strategic foundation that determines whether infrastructure can deliver at scale, without compromise.
At the Point of Proof with Louis
This conversation captures Louis Charlton’s unfiltered perspective on the moment infrastructure moves from theory to reality, and what it takes to get it right.
Q1. How do you define “true operational readiness” for modern AI-ready data centers, and how has your work helped set new benchmarks for this standard?
True operational readiness means the facility performs exactly as designed, under real load conditions, with every system (electrical, mechanical, controls, and cooling) verified not in isolation but as an integrated whole. It means the operations team can take ownership with confidence, not hope.
The problem is that the industry has historically treated commissioning as a checkbox at the end of construction. A punchlist exercise. That was insufficient for traditional enterprise data centers, and it is completely inadequate for AI-ready infrastructure, where power densities are multiples of what they were five years ago, cooling architectures are fundamentally different, and the cost of a single failure event can run into tens of millions.
What we have done at Global Commissioning is redefine where operational readiness begins. It does not begin at handover. It begins at design, and if it doesn’t, I don’t believe the process has been followed correctly. We embed commissioning logic into the design review process, so that by the time a facility reaches integrated systems testing, we are validating what was intended, not discovering what was missed. That shift, from reactive verification to proactive assurance, is what sets a genuine benchmark. It means the operator receives a facility with an evidence base, not just a certificate.
Q2. Commissioning is often the final step before a data center goes live, but also one of the most critical. How has your work influenced the way operators approach infrastructure readiness and go-live confidence at scale?
The characterization of commissioning as the “final step” is itself part of the problem. When it sits at the end of the programme, it absorbs every delay, every design compromise, every coordination failure that preceded it. Commissioning becomes the shock absorber for the entire delivery chain, and then the industry wonders why go-live confidence is inconsistent.
Our influence has been to challenge that sequencing. We work with operators and developers to move commissioning forward, into procurement, design, and factory acceptance. When you do that, go-live confidence is not something you hope for in the final weeks. It is something you build methodically across the entire delivery programme.
At scale, this matters enormously. A single campus might represent billions in capital investment. The commissioning approach needs to match that scale of investment, not still be operating on logic designed for a single-building deployment. We have helped operators move from treating commissioning as a procurement line item to treating it as a strategic function, one that directly protects the capital they have deployed and the revenue that depends on that infrastructure being available on day one.
Q3. In your experience, what are the most critical infrastructure risks that are often overlooked before commissioning, and how has your approach helped prevent them at scale?
The risks that cause the most damage are rarely exotic. They are coordination failures. A control sequence that was specified one way, built another, and never tested under the conditions it will actually face. Switchgear that passed factory acceptance but was never verified against the site-specific protection coordination study. Cooling systems commissioned to design conditions that no longer reflect the actual IT load profile.
These are not edge cases. They are systemic. And they persist because the industry still treats QAQC and commissioning as two separate workstreams, often delivered by separate organizations with no shared data platform and no integrated assurance logic.
Our approach is to unify these functions. QAQC and commissioning must operate as one system, not sequentially, but concurrently. When a piece of equipment is manufactured, the assurance process should begin at the factory, carry through installation, and be completed at integrated systems testing with full traceability. That eliminates the gaps where risk hides. It also means that when we arrive at commissioning, we are not discovering problems for the first time. We are confirming that the problems were prevented upstream, because we were there upstream.
Q4. One of the biggest challenges in data center delivery is the gap between design intent and operational reality. How has your work helped bridge this gap during commissioning?
The gap between design intent and operational reality exists because commissioning has historically been excluded from the design process. An engineer designs a system. A contractor builds it. A commissioning agent arrives at the end and tests it. If the design contained assumptions that do not hold in the field, and they often do, you discover that at the worst possible moment: when you are trying to go live.
We close that gap by being present at design. Not as a design consultant (that is not our role), but as the party that will ultimately have to prove the facility works. That perspective changes the conversation. We ask questions during design review that are rooted in what we know from commissioning hundreds of facilities: Will this sequence actually perform under partial load? Has the redundancy been tested under boundary conditions, not just at steady state? Can this system be maintained without taking down a concurrent feed?
When those questions are asked early, the answers get built into the design. When they are asked late, they become variations, delays, and risks. The design-to-operations gap is not a mystery. It is a direct consequence of excluding the people who prove the design works from the process of creating it.
Q5. With increasing rack densities and advanced cooling technologies, what new challenges are emerging during commissioning and how are you addressing them?
The shift to high-density AI compute, 50 kW, 100 kW, and beyond per rack, has fundamentally changed the commissioning challenge. Traditional air-cooled facilities had well-understood commissioning frameworks. You could point to ASHRAE guidelines, established test procedures, and decades of operational data. That foundation does not yet exist for direct liquid cooling, rear-door heat exchangers, or immersion systems at scale.
Liquid cooling, in particular, is not just a cooling decision. It is a commissioning decision. You are introducing fluid systems into the IT space, with leak detection, flow balancing, water treatment, and thermal management requirements that demand entirely new verification protocols. The standards bodies are catching up, but the deployment pace has outstripped the standards development cycle. That means the commissioning authority has to fill the gap with engineering rigour and built experience, not wait for a published guideline.
We are addressing this by developing commissioning methodologies specific to these technologies, drawing on what we see across multiple hyperscale programmes, not just one. That cross-programme visibility is critical because no single operator has yet established a mature operational baseline for liquid-cooled AI infrastructure at scale. We are helping to build that baseline, one verified deployment at a time.
Q6. Looking back at some of the most complex projects you’ve been involved in, what key lessons have fundamentally changed how you approach commissioning today?
Three lessons stand out, and they all point in the same direction.
First, timeline compression kills quality if you let it. Every complex programme faces schedule pressure. The lesson is not to resist that pressure; it is to build an assurance process robust enough to absorb it without cutting corners. That means front-loading the work: getting commissioning logic embedded early, running factory acceptance properly, and ensuring that by the time you reach the site, the unknowns have been reduced to a manageable number. If commissioning is the first time you stress-test the design decisions, you have already lost the programme.
Second, people matter more than procedures. The best commissioning methodology in the world fails if it is executed by an inconsistent workforce, a rotating cast of freelancers with no shared standards, no institutional knowledge, and no accountability beyond the current contract. We made the decision early to build a permanent team, invest in their development, and deploy them consistently. That decision has been the single biggest driver of quality and repeatability in our work.
Third, evidence is everything. Opinions do not protect capital; test results do. Every facility we commission produces a complete evidence base: not a summary report, but a traceable, auditable record that proves the infrastructure performs as designed. That evidence base is what gives operators, investors, and insurers genuine confidence. It is the difference between a facility that was commissioned and a facility that was proven.
Louis Charlton: Where Performance Gets Proven
Louis Charlton’s role in this series reflects a shift the industry can no longer ignore: infrastructure must be proven, not presumed. In an era defined by AI-scale demand, certainty at go-live is becoming as critical as design itself. His perspective brings clarity to a growing blind spot. As systems grow more complex, the real risk is no longer in what is built, but in what is left unverified. Commissioning, in this context, becomes the discipline that closes that gap. What sets his approach apart is its timing. By embedding readiness early, not late, he reframes execution as a continuous process rather than a final event. The result is infrastructure that performs with intent, not assumption.
As this series moves forward, one principle stands firm: the future will favor leaders who ensure infrastructure delivers exactly as designed, every time, under real conditions.
