STMicroelectronics has expanded its high-voltage data center power portfolio, introducing two new architectures, 800 VDC to 12V and 800 VDC to 6V, built in alignment with NVIDIA’s 800 VDC reference design. These additions extend its earlier 800 VDC to 50V solution and position the company deeper within next-generation AI infrastructure stacks.
The shift toward 800 VDC distribution marks a structural redesign of how power moves across hyperscale facilities. It enables higher efficiency, reduces transmission losses, and supports denser compute clusters required for modern AI workloads.
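The loss reduction comes from basic conduction physics: for a fixed power, raising the distribution voltage lowers current proportionally, and resistive loss falls with the square of current. The sketch below illustrates this with assumed numbers (a 100 kW rack and a 2 mΩ bus resistance are hypothetical, not ST or NVIDIA figures):

```python
# Illustrative only: why higher distribution voltage cuts I^2*R losses.
# Rack power and bus resistance are assumed values, not vendor specs.

def line_loss_w(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """Conduction loss in a bus with total resistance `resistance_ohm`."""
    current_a = power_w / voltage_v
    return current_a ** 2 * resistance_ohm

rack_power = 100_000.0   # 100 kW rack (assumed)
bus_resistance = 0.002   # 2 mOhm busbar (assumed)

loss_54v = line_loss_w(rack_power, 54.0, bus_resistance)
loss_800v = line_loss_w(rack_power, 800.0, bus_resistance)

print(f"54 V bus loss:  {loss_54v:,.0f} W")
print(f"800 V bus loss: {loss_800v:,.0f} W")
print(f"reduction: {loss_54v / loss_800v:.0f}x")
```

With these assumptions, moving from a 54 V to an 800 V bus cuts conduction loss by a factor of (800/54)², roughly 220×, which is why 800 VDC also allows thinner copper for the same delivered power.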
“As AI infrastructure compute scale continues to expand fast, it requires higher voltage distribution and greater density, which can only be achieved with system-level innovation for each of the different AI server form factors,” said Marco Cassis, President, Analog, Power & Discrete, MEMS and Sensors Group, and Head of Strategy, System Research and Applications, Innovation Office at STMicroelectronics. “With these new converters for 800 VDC power distribution, ST brings a complete set of solutions to support the deployment of gigawatt-scale compute infrastructure with more efficient, scalable, and sustainable power architectures.”
The expansion into 12V and 6V conversion layers signals a broader industry transition. AI servers now operate across varied architectures, each defined by GPU generation, rack density, thermal constraints, and workload specialization. As a result, 50V, 12V, and 6V intermediate buses will coexist across data centers.
This fragmentation is not inefficiency; it reflects optimization. Training clusters, inference farms, and high-density deployments each demand tailored power delivery strategies.
800 VDC to 12V: Eliminating Conversion Complexity
The 800 VDC to 12V architecture introduces a more direct path from rack-level power shelves to AI accelerators. It removes the conventional 54V intermediate stage, which historically added inefficiencies and system complexity.
This approach reduces conversion steps, lowers system-level losses, and simplifies integration for future GPU platforms. Additionally, the newly developed high-density power delivery board (PDB) achieves efficiency levels exceeding those of traditional two-stage architectures. As a result, operators gain improved rack-level efficiency alongside reduced copper usage. The 800 VDC to 6V design targets a different optimization layer: physical proximity to compute.
It allows system builders to position power conversion closer to GPUs, minimizing resistive losses and improving transient response under dynamic workloads. This becomes critical in large-scale AI training environments, where rapid load shifts can destabilize performance. Furthermore, reducing conversion stages enables more compact and efficient server designs, especially in ultra-dense GPU configurations.
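The payoff from removing an intermediate stage follows from the fact that cascaded converter efficiencies multiply. A minimal sketch, using assumed per-stage efficiencies (the 97.5%, 96%, and 96.5% figures are hypothetical, not ST datasheet values):

```python
# Illustrative only: efficiencies of cascaded conversion stages multiply,
# so fewer stages can mean higher end-to-end efficiency.
from math import prod

def chain_efficiency(stage_effs: list[float]) -> float:
    """Overall efficiency of a chain of conversion stages."""
    return prod(stage_effs)

# Assumed per-stage efficiencies, for illustration only:
two_stage = chain_efficiency([0.975, 0.96])  # 800 V -> 54 V, then 54 V -> 12 V
one_stage = chain_efficiency([0.965])        # direct 800 V -> 12 V

print(f"two-stage: {two_stage:.1%}")
print(f"single-stage: {one_stage:.1%}")
```

Under these assumptions the two-stage chain lands at about 93.6%, while a direct conversion stage only has to beat that product, not each individual stage, to come out ahead.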
Building a Full-Stack Power Ecosystem for AI Data Centers
STMicroelectronics’ broader strategy integrates multiple semiconductor technologies: silicon, silicon carbide (SiC), and gallium nitride (GaN), alongside analog, mixed-signal, and microcontroller components. This full-stack approach allows tighter control over efficiency, density, and thermal performance.
Notably, in October 2025, the company introduced a GaN-based LLC converter prototype operating directly from 800 V at 1 MHz. It delivered over 98% efficiency and power density exceeding 2,600 W/in³ at 50 V within a compact footprint. However, the significance of this expansion extends beyond component innovation. Power architecture is emerging as a central constraint and opportunity in AI infrastructure scaling.
Hyperscalers now face gigawatt-scale deployment challenges where incremental efficiency gains translate into substantial operational savings. Therefore, architectures that reduce losses, minimize material usage, and support flexible server designs will define competitive advantage.
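A back-of-envelope calculation shows the scale involved. All inputs here are assumptions for illustration (the load, the before/after efficiencies, and the electricity price are hypothetical, not figures from ST or any operator):

```python
# Back-of-envelope: what an efficiency gain is worth at gigawatt scale.
# Load, efficiencies, and electricity price are assumed for illustration.

it_load_w = 1e9                     # 1 GW of compute load (assumed)
eff_before, eff_after = 0.936, 0.965  # delivery efficiency (assumed)
price_per_kwh = 0.08                # USD per kWh (assumed)

input_before = it_load_w / eff_before  # grid power drawn at lower efficiency
input_after = it_load_w / eff_after    # grid power drawn at higher efficiency
saved_w = input_before - input_after

annual_kwh = saved_w / 1000 * 8760  # watts -> kW, times hours per year
print(f"power saved: {saved_w / 1e6:.1f} MW")
print(f"annual savings: ${annual_kwh * price_per_kwh / 1e6:.1f}M")
```

With these assumed numbers, a roughly three-point efficiency improvement frees on the order of 32 MW of input power, worth tens of millions of dollars per year, which is why "incremental" gains matter at this scale.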
Moreover, alignment with NVIDIA’s reference design strengthens ecosystem standardization, which could accelerate adoption across OEMs and system integrators. Consequently, power delivery is no longer a background system; it is becoming a primary design axis for AI data centers.
As compute density rises and workloads intensify, innovations at the power layer will increasingly dictate infrastructure performance, cost, and scalability. STMicroelectronics’ expanded 800 VDC portfolio reflects this shift, signaling a future where power architecture and compute architecture evolve in tandem.
