To address the escalating energy demands of artificial intelligence infrastructure, Hitachi, Ltd. has introduced an 800-volt direct current architecture designed specifically for high-density AI data centers. The system integrates into the Vera Rubin DSX reference design and aligns with NVIDIA’s Omniverse DSX Blueprint, signaling a deeper convergence between power engineering and digital simulation.
The announcement, showcased at GTC 2026, reflects a broader industry pivot toward rethinking power delivery as a foundational constraint in scaling AI compute. As AI workloads surge, traditional alternating current systems increasingly struggle to meet efficiency, reliability, and density requirements.
Digital simulation meets physical infrastructure
At the core of Hitachi’s architecture lies a 3D simulation model built using OpenUSD, designed to replicate energy flow across the entire electrical chain, from utility grid to rack-level consumption. This approach enables operators to model real-world performance before deployment, reducing uncertainty in large-scale AI infrastructure builds.
The system integrates with NVIDIA Omniverse as SimReady assets, effectively bridging the gap between design environments and operational workflows. Consequently, developers can simulate complex interactions between compute loads, power systems, and grid dynamics within a unified digital environment.
The model introduces a new level of visibility into how AI-driven workloads impact upstream energy systems. By capturing transient load behaviors and power fluctuations, it allows stakeholders to anticipate bottlenecks and optimize configurations ahead of physical deployment.
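Hitachi has not published the internals of its OpenUSD model, but the underlying idea of tracing energy flow across the delivery chain can be sketched with a toy stage-efficiency model. The stage names and efficiency values below are illustrative assumptions, not vendor figures:

```python
# Toy model of an electrical delivery chain: each stage applies an
# efficiency factor, so rack-level consumption can be traced back to
# grid draw. All stage names and efficiencies are assumptions for
# illustration, not figures published by Hitachi or NVIDIA.

CHAIN = [
    ("utility_substation", 0.995),
    ("medium_voltage_distribution", 0.99),
    ("800vdc_rectifier", 0.975),
    ("rack_busbar", 0.998),
]

def grid_draw_for_rack_load(rack_load_kw: float) -> float:
    """Power the grid must supply so racks receive rack_load_kw."""
    power = rack_load_kw
    for _, efficiency in reversed(CHAIN):
        power /= efficiency
    return power

def losses_per_stage(rack_load_kw: float) -> dict:
    """Loss (kW) attributable to each stage for a given rack load."""
    report = {}
    power = grid_draw_for_rack_load(rack_load_kw)
    for name, efficiency in CHAIN:
        delivered = power * efficiency
        report[name] = power - delivered
        power = delivered
    return report
```

A per-stage loss breakdown like this is the kind of visibility a full 3D simulation would provide continuously rather than as a one-off calculation.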
Stabilizing power for volatile AI workloads
AI infrastructure introduces highly variable power demand patterns, often characterized by rapid spikes linked to compute-intensive training cycles. Hitachi’s architecture addresses this challenge through advanced power electronics and digital control systems that actively manage power quality.
Within the simulation environment, the system analyzes disturbances caused by spiky workloads and dynamically smooths output using integrated battery energy storage systems. This capability ensures consistent delivery of 800 VDC power to racks while maintaining compliance with grid stability requirements.
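The smoothing behavior described above can be sketched as a simple control loop in which a battery energy storage system clips spikes and fills dips against a target grid draw. Hitachi's actual control logic is not public; every parameter here is an illustrative assumption:

```python
# Sketch of load smoothing with a battery energy storage system (BESS):
# the battery discharges during spikes and charges during dips so the
# grid sees a flatter draw. Capacity, power limit, and target values
# are illustrative assumptions, not Hitachi specifications.

def smooth_load(load_kw, target_kw, capacity_kwh, max_power_kw,
                dt_h=1 / 3600):
    """Return the grid draw per time step after battery smoothing."""
    soc_kwh = capacity_kwh / 2  # start half charged
    grid = []
    for load in load_kw:
        desired = load - target_kw  # + = discharge, - = charge
        power = max(-max_power_kw, min(max_power_kw, desired))
        # Respect state-of-charge limits.
        if power > 0:
            power = min(power, soc_kwh / dt_h)
        else:
            power = max(power, (soc_kwh - capacity_kwh) / dt_h)
        soc_kwh -= power * dt_h
        grid.append(load - power)
    return grid
```

For a spiky trace such as `[400, 1200, 300, 1100, 500]` kW with a 700 kW target, the battery absorbs each excursion it has headroom for, so the grid-side profile is far flatter than the rack-side one.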
Moreover, the architecture extends beyond real-time control into predictive management. By integrating thermal, condition-monitoring, and asset-health models, it enables both predictive and prescriptive maintenance strategies across the entire energy chain, from data center infrastructure to substations and grid interfaces.
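As a rough illustration of how thermal and condition indicators might feed a predictive-maintenance decision, consider a scoring model like the one below. The weights, thresholds, and fields are invented for this sketch and do not reflect Hitachi's actual models:

```python
# Hypothetical predictive-maintenance scoring: combine a thermal
# indicator and an asset-health indicator into one risk score, then
# flag assets before failure. Weights and thresholds are invented
# for illustration only.

from dataclasses import dataclass

@dataclass
class AssetReading:
    name: str
    temperature_c: float      # measured hotspot temperature
    rated_temp_c: float       # design temperature limit
    insulation_health: float  # 0.0 (failed) .. 1.0 (new)

def risk_score(r: AssetReading) -> float:
    """0 = healthy, 1 = act now."""
    thermal = min(1.0, max(0.0, r.temperature_c / r.rated_temp_c))
    return 0.5 * thermal + 0.5 * (1.0 - r.insulation_health)

def maintenance_queue(readings, threshold=0.6):
    """Names of assets to schedule, worst first."""
    flagged = [(risk_score(r), r.name) for r in readings]
    return [name for score, name in sorted(flagged, reverse=True)
            if score >= threshold]
```

A transformer running near its thermal limit with degraded insulation would be queued ahead of a cool, healthy unit, which is the essence of prescriptive maintenance: ranking interventions, not just detecting faults.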
Scaling toward gigawatt AI factories
The strategic importance of this architecture becomes clearer when viewed against projected energy demand curves. AI workloads are expected to drive electricity demand to as much as 125 gigawatts of capacity by 2030, forcing a fundamental redesign of how power is generated, distributed, and consumed.
Hitachi positions its 800 VDC system as a critical enabler of this transition. The architecture can handle up to 15 times more power than legacy systems, allowing operators to significantly increase compute density while reducing overall electrical losses and spatial footprint.
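The "up to 15 times" figure is consistent with basic electrical scaling: for conductors rated at a fixed current, deliverable power P = V·I grows linearly with voltage. The 54 V legacy baseline below is an assumption (a common in-rack DC busbar voltage today), not a number from the announcement:

```python
# Back-of-envelope check on the "up to 15x" claim. At a fixed conductor
# current, power P = V * I scales with voltage; at a fixed delivered
# power, I^2*R conduction losses fall with the square of the voltage
# ratio. The 54 V legacy busbar value is an assumption.

LEGACY_V = 54.0
NEW_V = 800.0

def power_ratio(v_new: float, v_old: float) -> float:
    """Power gain at a fixed conductor current."""
    return v_new / v_old

def loss_ratio(v_new: float, v_old: float) -> float:
    """I^2*R conduction-loss ratio when delivering the same power."""
    return (v_old / v_new) ** 2

print(round(power_ratio(NEW_V, LEGACY_V), 1))  # 14.8
print(round(loss_ratio(NEW_V, LEGACY_V), 4))   # 0.0046
```

The ~14.8x power gain matches the article's "up to 15 times" claim, and the same voltage step cuts conduction losses in the distribution path to well under 1% of their legacy level.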
Furthermore, the compatibility with NVIDIA’s evolving rack designs ensures that the solution remains aligned with the rapid innovation cycles of AI hardware. This interoperability reduces integration friction and accelerates time-to-deployment for hyperscale AI facilities.
A strategic shift in infrastructure thinking
This development underscores a broader shift in how the industry approaches AI infrastructure. Power systems are no longer treated as passive enablers but as active, intelligent layers that must co-evolve with compute architectures.
Therefore, the collaboration between Hitachi and NVIDIA signals a move toward tightly coupled ecosystems where simulation, hardware, and energy systems operate in sync. As AI factories scale toward gigawatt levels, such integrated approaches will likely define the next phase of data center innovation.
In effect, the 800 VDC architecture represents more than a technical upgrade. It marks a transition toward energy-aware computing infrastructure where efficiency, predictability, and scalability converge to support the demands of next-generation AI systems.
