The Gigawatt Campus Problem: Why the Biggest AI Infrastructure Projects Keep Running Into the Same Walls

The AI infrastructure buildout has produced a category of project that did not exist five years ago: the gigawatt campus. These are not data centers in any conventional sense. They are vertically integrated power and compute platforms, covering thousands of acres, targeting power capacities equivalent to medium-sized cities, and requiring construction timelines measured in years rather than months. The ambition behind them is genuine. The execution record, however, is becoming a pattern worth examining carefully.

Fermi America’s Project Matador stalled without an anchor tenant and lost its CEO and CFO within days of each other. Oracle and OpenAI scrapped their Abilene expansion. Applied Digital’s Delta Forge 1 secured its $7.5 billion lease but only after months of delays. The pattern across these projects is not random. It reflects structural constraints that operate at gigawatt scale in ways that simply do not apply at smaller project sizes.

Why the First Wave Got It Wrong

The gigawatt campus as a concept emerged from a specific moment in AI history. When the explosion in AI investment made clear that compute demand would grow by orders of magnitude, infrastructure developers and their financial backers began extrapolating forward. If AI companies needed hundreds of megawatts in 2023, they would need gigawatts in 2025 and tens of gigawatts by 2030. That extrapolation was not wrong. What was wrong, however, was the assumption that the infrastructure required to serve that demand could be built using the same approaches and risk frameworks that served the data center industry for the prior two decades. Gigawatt-scale development is not a bigger version of conventional data center development. It is a structurally and operationally different activity that demands different approaches at every stage.

The speed with which the gigawatt campus category attracted capital made the underlying problem worse, not better. When companies fund projects based on vision before they fully understand operational challenges, the market creates incentives to present optimistic timelines and downplay structural risks. Investors and stakeholders are now judging the projects that companies took public in 2025 based on their announced scale, political connections, and the delivery timelines those announcements implied. The reckoning is not complete, but it is underway. Understanding what specifically went wrong, and why, is the essential first step toward a more disciplined second generation of gigawatt campus development.

Why the Conventional Development Model Breaks at This Scale

Standard data center development follows a logic the industry has refined over two decades. A developer secures land, obtains grid interconnection, builds out in phases tied to confirmed customer demand, and scales incrementally. Each phase funds the next. Risk is distributed across the timeline. No single decision point, in other words, carries existential weight.

Gigawatt campus development breaks this model at almost every step. Land at the required scale must be secured before customers are identified, meaning thousands of acres in locations with access to large power reserves. Grid interconnection for loads above 500 megawatts requires utility approvals and transmission upgrades that take years to execute. Consequently, the capital committed before a single customer signs a lease is orders of magnitude higher than in conventional development.

The Anchor Tenant Trap

The anchor tenant problem sits at the centre of this dynamic. A gigawatt campus developer needs a hyperscaler or a large AI lab to commit to substantial capacity before construction financing becomes viable and before the team can finalise critical design decisions. However, hyperscalers need to see credible construction progress, confirmed power access, and a clear path to operational readiness before they will sign a multi-year lease worth billions. Each party, in other words, is waiting for the other to move first.

That standoff is not a negotiating failure. Rather, it is a structural feature of the asset class, and it has derailed well-capitalised, genuinely ambitious projects that were, in some cases, politically well-connected. The public market context further compounds the financial pressure this dynamic creates. Fermi America went public in October 2025, raising $746 million before generating a dollar of revenue. That valuation reflected investor enthusiasm for AI infrastructure, not operational progress. The mismatch between public market expectations and delivery reality is now visible in stocks that have lost 50%, 75%, and in Fermi's case more than 80% of their IPO value.

The anchor tenant problem also has a downstream effect on construction financing. Project finance lenders in the data center market have historically required offtake agreements, typically long-term leases from creditworthy tenants, before committing construction debt. At gigawatt scale, those requirements are even more stringent because the project size makes the lender’s exposure proportionally larger. A developer who cannot show a signed anchor tenant cannot access construction financing on reasonable terms, and a developer who cannot access construction financing cannot demonstrate the construction progress that would persuade a hyperscaler to sign. The circularity is structurally embedded in the financing model for this asset class.

The Cooling Supply Chain Is a Hidden Constraint

The anchor tenant dependency is the most visible constraint on gigawatt campus development. It is not, however, the only one. The cooling supply chain has emerged as a second structural barrier that projects at this scale are discovering late, often after significant capital has already been committed. Cooling at the density required for modern AI training infrastructure requires specialised equipment, fluid management systems, and facility engineering that the supply chain cannot currently deliver at gigawatt campus pace.

Lead times for large-scale cooling distribution units and custom manifolds are running 18 to 24 months at the vendors capable of building at this quality level. For a project aiming to bring 1 gigawatt online within 12 months of breaking ground, that timeline is incompatible. The maths simply does not work. Furthermore, the supply chain and the construction timeline are in direct conflict in ways that project developers consistently underestimated in the first wave of gigawatt campus announcements.
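The timeline conflict described above can be made concrete with back-of-the-envelope arithmetic. A minimal sketch, using only the ranges quoted in the text (the function name and the assumption that orders are placed at groundbreaking are illustrative, not project data):

```python
# Back-of-the-envelope check: can a 12-month build absorb an
# 18-to-24-month cooling equipment lead time? Figures come from the
# ranges quoted in the article; the order timing is an assumption.

def months_late(target_online_months: int, lead_time_months: int,
                order_placed_month: int = 0) -> int:
    """Months by which equipment delivery overshoots the target
    commissioning date, assuming the order is placed at groundbreaking."""
    delivery = order_placed_month + lead_time_months
    return max(0, delivery - target_online_months)

# Order placed the day ground breaks, best-case 18-month lead time:
print(months_late(target_online_months=12, lead_time_months=18))  # 6
# Worst-case 24-month lead time:
print(months_late(target_online_months=12, lead_time_months=24))  # 12
```

Even in the best case, the facility waits half a year for cooling equipment after its target commissioning date, which is why second-wave developers are starting procurement 24 to 30 months ahead.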

Where Developers Are Getting Caught Out

Fermi America’s CEO acknowledged this in his final interview before stepping down. He said he may have misunderstood where the supply chain was for the cooling equipment the project needed. That admission points to a broader industry gap: cooling complexity at gigawatt scale is genuinely different from anything data center developers have previously managed. At conventional data center scale, established vendor relationships and predictable lead times are the baseline. At gigawatt scale, none of those assumptions hold.

The cooling constraint also interacts with the anchor tenant problem in ways that compound both. A developer cannot finalise cooling system design without knowing what workloads the facility will support, and what workloads the facility will support depends on which tenants sign. Different hyperscalers have different cooling requirements based on their specific GPU configurations and rack densities. Consequently, the developer is caught between the need to start procurement to hit timeline targets and the inability to specify that infrastructure without a confirmed tenant. Projects that successfully navigated this either secured the tenant first or designed sufficient flexibility to accommodate multiple configurations.

The Workforce Constraint Nobody Is Talking About

Beyond the lead times for equipment, there is also a workforce constraint that has received even less attention than the supply chain itself. The engineering and construction workforce capable of deploying cooling infrastructure at gigawatt density is genuinely limited globally. The project managers, mechanical engineers, and specialist contractors who understand large-scale liquid cooling and immersion systems at this density are in short supply. Large projects are competing for the same pool of expertise, and the projects that committed earliest to their engineering teams paid accordingly. As a result, they have a workforce advantage that latecomers cannot quickly close regardless of the capital available to them.

The cooling workforce shortage is a microcosm of a broader talent gap in AI infrastructure construction. The skills required to build a gigawatt campus, from high-voltage electrical engineering to advanced thermal systems to the project management discipline needed to coordinate thousands of workers across a multi-year build, are not abundant. Traditional data center construction at 20 to 50 megawatts draws on a well-established contractor ecosystem with predictable labour availability. At gigawatt scale, that ecosystem does not exist in the same form. Developers are recruiting globally, paying significant premiums for specialist expertise, and in some cases building internal capabilities that would previously have been outsourced. That organisational investment adds to the pre-revenue capital burden of gigawatt campus development in ways that are not fully visible in project cost estimates.

Why the Power Interconnection Timeline Is Getting Longer, Not Shorter

The third structural constraint on gigawatt campus development is power interconnection, and unlike the first two, it is getting worse rather than better. Dominion Energy Virginia, which serves the highest concentration of data centers in the world, has disclosed wait times of up to seven years for loads above 100 megawatts. Lawrence Berkeley National Laboratory’s Queued Up analysis found that only 13% of capacity entering interconnection queues between 2000 and 2018 was ever actually built.

A 1 gigawatt facility represents the power demand of roughly 750,000 homes. No utility in the United States offers a standard interconnection process for loads at that level. Each project requires developers to negotiate transmission upgrades, substation construction, and cost allocation through custom agreements. That process can take years before utilities confirm a single megawatt of capacity. As we have covered in detail in our analysis of firm power vs flexible power in AI infrastructure, behind-the-meter generation avoids the grid queue but introduces its own permitting, fuel supply, and reliability complexity that does not disappear simply because developers bypass the grid queue.
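The "roughly 750,000 homes" figure can be sanity-checked from average household consumption. A sketch, assuming the commonly cited U.S. average of about 10,700 kWh per household per year (an assumption on our part, not a figure from the article):

```python
# Sanity check on the "roughly 750,000 homes" equivalence for a 1 GW load.
# The household consumption figure is an assumed U.S. average.

HOURS_PER_YEAR = 8760
avg_home_kwh_per_year = 10_700                        # assumed average
avg_home_kw = avg_home_kwh_per_year / HOURS_PER_YEAR  # ~1.22 kW mean draw

facility_kw = 1_000_000                               # 1 GW in kW
homes = facility_kw / avg_home_kw
print(round(homes))
```

Depending on the consumption assumption, the result lands within roughly 10% of the 750,000-home figure, which is the right order of magnitude for thinking about why no utility has a standard process for loads like this.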

The Equipment Supply Chain Compounds the Problem

The tariff environment has added another layer of uncertainty to power procurement for gigawatt campuses. Import tariffs on electrical equipment from China have significantly increased costs and lengthened lead times for transformers, switchgear, and other components manufactured primarily there. As we have covered in our analysis of the silent bottleneck of transformer and substation supply chains, the electrical equipment supply chain was already under severe pressure before tariffs added further strain.

The median time from interconnection request to commercial operation has reached five years. For developers accustomed to 18-month construction timelines, that is not an incremental complication. It is a structural change in how projects must be conceived, financed, and managed. Projects that treat interconnection as a procurement exercise rather than a multi-year strategic engagement are consistently the ones that discover this constraint too late.

Behind-the-Meter Generation as a Response, and Its Limits

Behind-the-meter power generation emerged as the primary strategy for gigawatt campus developers seeking to bypass the grid interconnection queue entirely. The logic is straightforward: if the grid queue is five to seven years, develop your own generation on-site and avoid the queue altogether. Several projects, including Project Matador, were built around this strategy, with natural gas as the primary fuel and nuclear power as a longer-term addition. The strategy is not inherently flawed. However, it introduces a different set of dependencies that are as challenging to manage as the grid interconnection queue it replaces.

Air permits for large-scale gas generation facilities take months to years to obtain and can be challenged by environmental groups and neighbouring communities. Fuel supply agreements for the volumes required at gigawatt scale require long-term contracts with significant financial commitments. And critically, behind-the-meter generation does not eliminate the anchor tenant problem. A developer still needs a confirmed tenant before securing project financing, regardless of whether the power comes from the grid or from on-site generation. In some respects, behind-the-meter generation makes the anchor tenant problem harder, not easier, because it adds a second complex dependency, the power system, that must be resolved alongside the first.

Why Queue Position Is Now a Strategic Asset

Queue position has consequently become a strategic asset with real economic value, and the developers who recognised this early built their site strategies around securing queue positions in attractive markets before the current wave of AI infrastructure investment made those positions scarce. A project that secured interconnection rights in 2021 is operating from a position of strength that a project applying today cannot replicate regardless of its financial resources or political relationships. That structural advantage compounds over time and translates directly into faster commissioning timelines and lower financing costs.

What Separates the Projects That Actually Deliver

Not every large-scale AI infrastructure project is stalling. The Stargate campus in Abilene, operated by Crusoe, brought its first 200-megawatt phase online in approximately 13 months from the start of vertical construction. Applied Digital’s Delta Forge 1 secured its hyperscaler lease and is progressing toward commissioning. The difference between these projects and those that have stalled is not primarily capital, ambition, or political backing.

Rather, success depends on the presence of a confirmed anchor tenant before major construction commitments begin, combined with a grid-connected power strategy that does not depend on novel technologies or untested supply chain relationships. Both the Stargate campus and Applied Digital's Delta Forge project are grid-connected, with established utility relationships and interconnection agreements built on confirmed power access rather than aspirational behind-the-meter timelines.

The Tenant-First Sequencing That Actually Works

The Stargate campus had OpenAI as its anchor tenant before construction began at scale. That confirmation allowed Crusoe to finalise cooling design, secure equipment with sufficient lead time, and obtain construction financing against contracted revenue. Applied Digital had Microsoft as its hyperscaler partner, which similarly allowed design decisions to be made against confirmed requirements rather than anticipated ones.

The lesson from these projects is not that gigawatt campus development is inherently unviable. It is, rather, that the development model used for conventional data centers does not transfer to this scale. As we have covered in our analysis of the 5-year wait problem in AI infrastructure lead times, the operators who understand this structure their development processes accordingly. Those who do not are discovering the constraints operationally, at a cost measured in both capital and credibility.

For investors evaluating the gigawatt campus category, the operational differentiation between projects that are delivering and those that are stalling has significant implications for valuation. The projects succeeding share specific structural characteristics: confirmed anchor tenants, grid-connected power strategies with established utility relationships, and engineering partnerships that include specialised cooling expertise at this density. The projects struggling share a different profile: they went to market on ambition before locking down the structural foundations. The market is now pricing this distinction, and the gap between the two groups is widening.

Insurance and Risk Underwriting Are Also Shifting

The risk assessment frameworks that insurers and infrastructure lenders apply to gigawatt campus projects are evolving rapidly in response to the operational evidence from the first wave. Insurance capacity for large AI data center projects was already constrained before the current wave of project difficulties. As projects have run into anchor tenant problems, cooling supply chain delays, and power interconnection complications, insurers are applying more rigorous underwriting standards, raising costs for projects that cannot demonstrate those structural foundations.

Lenders are applying similar discipline. Construction loan terms for gigawatt campus projects without confirmed anchor tenants are increasingly reflecting the risk that anchor tenant commitments take longer to secure than originally projected. The cost of capital, in other words, is pricing the structural risk that developers are carrying when they commit to construction before confirming the tenant and power access that de-risk the investment. That market signal is a healthy correction, and it will accelerate the adoption of the tenant-first, power-confirmed development model that the evidence from the first wave clearly supports.

The Regulatory Environment Is Adding New Variables

Beyond operational constraints, gigawatt campus development is navigating a regulatory environment that is itself in transition. The political backlash against data center construction that has produced moratorium bills in Maine and legislative scrutiny in Pennsylvania is creating new uncertainty for projects that require approvals across multiple categories simultaneously. Air permits, water permits, and zoning approvals are all subject to challenge from a public increasingly aware of the electricity cost and environmental implications of large AI campuses.

The Fermi America situation illustrates this clearly. The project had secured its air permit, which was a genuine milestone. However, it was simultaneously managing a land lease with Texas Tech University, a relationship with the Department of Energy over the adjacent Pantex facility, a prospective nuclear deployment requiring its own approvals, and shareholder litigation from its first tenant’s departure. At gigawatt scale, the number of dependencies that can independently cause material delay is significantly larger than at smaller project scales.

The Federal and Political Dimension

Federal relationships have become a variable in project outcomes at gigawatt scale in ways that conventional data center development never required. The Commerce Department has intervened in AI infrastructure investment decisions through both export controls and, in some cases, direct advocacy for specific projects. The reported clash between Fermi America’s CEO and Commerce Secretary Howard Lutnick at the Nvidia GTC conference illustrates how federal relationships can become consequential. Projects of this size are, in effect, national infrastructure, and they are subject to the political dynamics that come with that status.

The Ratepayer Protection Pledge, signed at the White House in March 2026, signals the direction of regulatory pressure in a way that developers and utility partners cannot ignore. Utilities committing to interconnect gigawatt campuses on aggressive timelines are simultaneously under pressure to demonstrate they are not providing preferential treatment to industrial customers at the expense of residential ratepayers.

The Managing Community Relations Problem

The industry’s track record on community relations has been, at best, mixed. Data center operators have often approached community engagement as a checkbox exercise rather than a genuine dialogue, announcing projects without meaningful local consultation and relying on economic development arguments that are less compelling to residents already experiencing electricity rate increases than to governors measuring investment flows. At gigawatt scale, that approach is no longer viable.

A project that generates sustained community opposition can face permit challenges, legislative action, and media scrutiny that add months or years to development timelines regardless of its technical merits. The operators managing this dimension most effectively are those who engage communities before announcements rather than after, who make specific and enforceable commitments about electricity rate impacts, and who build relationships with local institutions that have standing in the regulatory processes governing permits and approvals. That investment in community relations is not charity. It is risk management for assets whose development timelines are long enough that sustained opposition can be financially material.

What the Next Phase of Gigawatt Campus Development Looks Like

The first generation of gigawatt campus projects has been a learning experience, and the lessons are now sufficiently visible that the second generation is already incorporating them. The projects that will succeed over the next three to five years will be more conservative in their public projections, more rigorous in their pre-commitment due diligence, and more deliberate in the sequencing of tenant commitments, power access confirmation, and construction starts.

The power strategy for next-generation gigawatt campuses will also be more sophisticated than the binary choice between grid interconnection and behind-the-meter generation that characterised the first wave. Developers now structure projects around multiple power sources, using grid interconnection as the primary supply, behind-the-meter generation as firm backup, and energy storage as the bridge between them. That hybrid approach is more expensive and more complex to operate. However, it is more resilient to the individual failure modes that derailed first-generation projects and more credible to hyperscaler customers evaluating operational reliability over a ten-year contract term.
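The hybrid supply stack described above, grid as primary, behind-the-meter generation as firm backup, storage as the bridge, can be sketched as a simple priority dispatch. The capacities, function name, and dispatch order below are illustrative assumptions, not figures from any real project:

```python
# Minimal sketch of a priority-ordered hybrid supply stack: grid first,
# then storage as a bridge, then behind-the-meter (BTM) generation.
# All capacities are illustrative.

def dispatch(load_mw: float, grid_mw: float, storage_mw: float,
             btm_mw: float) -> dict:
    """Allocate load across sources in priority order; returns MW served
    by each source plus any unserved remainder."""
    served = {}
    remaining = load_mw
    for name, capacity in [("grid", grid_mw),
                           ("storage", storage_mw),
                           ("btm", btm_mw)]:
        take = min(remaining, capacity)
        served[name] = take
        remaining -= take
    served["unserved"] = remaining
    return served

# Normal operation: the grid covers the full 800 MW load.
print(dispatch(load_mw=800, grid_mw=1000, storage_mw=200, btm_mw=600))
# Grid curtailed to 300 MW: storage bridges 200 MW, BTM picks up 300 MW.
print(dispatch(load_mw=800, grid_mw=300, storage_mw=200, btm_mw=600))
```

The point of the sketch is the resilience argument in the paragraph above: no single source failure leaves the load unserved, which is precisely the property a hyperscaler evaluates over a ten-year contract term.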

Next-generation projects will address the cooling supply chain differently. Developers who worked through the constraints of the first wave are now building procurement strategies that begin 24 to 30 months before planned commissioning. In some cases, they start equipment procurement before confirming anchor tenants, which requires a degree of capital commitment and operational confidence that not all developers can sustain. As we have covered in our analysis of how data centers are becoming power infrastructure companies, operators who treat supply chain management as a core strategic function are building the operational capability that will determine which projects in the next wave deliver on their timelines and which do not.

The Competitive Moat That Execution History Creates

The operators who have been through this cycle once are approaching their next projects differently. They are slower to announce, more disciplined in their public timelines, and more focused on locking in foundational commitments before taking capital from public markets. That discipline, born from operational experience, may prove to be the most durable competitive advantage in the gigawatt campus market.

The Valuation Reset That Is Already Underway

The market repricing of gigawatt campus assets is not complete, but it is underway. The equity premium that projects with confirmed anchor tenants and grid-connected power access command over those without is widening as the operational evidence accumulates. Build-ready sites in established utility service territories with confirmed interconnection agreements are trading at valuations that would have seemed excessive two years ago. Aspirational sites with large land areas but no confirmed power or tenant are trading at discounts that reflect the operational risk now priced into the category.

For investors, the gigawatt campus category has proven to be a more complex risk profile than the AI infrastructure growth narrative implied. Fermi America’s decline of more than 80% from its IPO high illustrates the gap between the market’s initial enthusiasm and the operational reality of delivering capacity at unprecedented scale. Projects that survive the current shakeout and reach commissioning will be the reference points for the next wave. They will demonstrate what the category can achieve when developers solve the anchor tenant problem, respect the cooling supply chain, and build a credible power interconnection strategy. As we have covered in our analysis of the time-to-power crisis as AI infrastructure’s hidden scaling ceiling, the projects that get power right are the ones that everything else follows from. The market is now beginning to price that conclusion with appropriate urgency.

What the Hyperscaler Pullback Actually Signals

Microsoft’s decision to walk away from approximately 2 gigawatts of preleased capacity in early 2026 reflects a reassessment that is partly about demand modelling and partly about a more disciplined approach to which development partners can actually deliver. The gigawatt campus is not a failed category. It is a maturing one, and maturity in infrastructure means trading ambition for execution, and hype for evidence. The projects that get this right over the next three years will not just deliver capacity. They will redefine what the category can achieve when developers properly understand and address its structural foundations from the outset rather than discover them through operational failure.
