Supermicro Launches Xeon 6900 SuperBlade Systems

Supermicro expands SuperBlade portfolio with high-density platforms

Supermicro Inc. has introduced a new high-density SuperBlade system designed for performance- and efficiency-driven data center deployments, expanding its blade server portfolio with air-cooled and direct liquid-cooled configurations.

The company said the new system, identified as the SBI-622BA-1NE12-LCC, is powered by dual Intel Xeon 6900 Series processors and supports up to 256 performance cores per blade. The platform is aimed at large-scale compute environments supporting artificial intelligence, high-performance computing, cloud infrastructure and enterprise workloads.

According to Supermicro, the system supports both traditional air cooling and direct liquid cooling, with cold plate options covering either the CPUs alone or the CPUs, DIMMs and voltage regulator modules (VRMs), enabling higher rack densities while keeping power and thermal constraints in check.

Architecture focused on density and power efficiency

Supermicro said a single 6U SuperBlade enclosure can house up to 10 blades, for as many as 2,560 processing cores per enclosure and up to 25,600 processing cores per rack when fully populated. Each processor can support up to 128 performance cores with a thermal design power of up to 500 watts.
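
For a sense of how those figures relate, the core counts follow directly from the stated processor and enclosure specifications. The short sketch below works through the arithmetic; the number of enclosures implied by the quoted per-rack figure is an inference from those numbers, not a stated configuration.

```python
# Back-of-the-envelope core-density math based on the figures quoted above.
CORES_PER_CPU = 128          # Intel Xeon 6900 Series, performance cores
CPUS_PER_BLADE = 2           # dual-socket blade
BLADES_PER_ENCLOSURE = 10    # 6U SuperBlade enclosure
RACK_CORES_QUOTED = 25_600   # per-rack figure cited by Supermicro

cores_per_blade = CORES_PER_CPU * CPUS_PER_BLADE              # 256
cores_per_enclosure = cores_per_blade * BLADES_PER_ENCLOSURE  # 2,560
enclosures_implied = RACK_CORES_QUOTED // cores_per_enclosure # 10

print(f"cores per blade:     {cores_per_blade}")
print(f"cores per enclosure: {cores_per_enclosure}")
print(f"enclosures implied by the quoted per-rack figure: {enclosures_implied}")
```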

The company said the design relies on shared power supplies, cooling fans, integrated networking and centralized chassis management to consolidate compute capacity into a smaller physical footprint. According to Supermicro, one enclosure can deliver performance comparable to a traditional server rack while reducing power consumption and operating costs.

“Supermicro’s SuperBlade architecture delivers industry-leading server density and energy efficiency, forming the foundational infrastructure for many of the world’s largest and most powerful high-performance computing systems,” said Charles Liang, President and Chief Executive Officer of Supermicro.

“This new iteration is the most core-dense SuperBlade we’ve ever created, providing customers with a scalable, efficient platform that leverages shared resources and direct liquid cooling to achieve maximum performance per watt and per square foot in modern data centers.”

Chassis management and remote control features

Supermicro said the SuperBlade chassis management module provides centralized monitoring and control across individual blades, power supplies, cooling systems and network switches. Administrators can set power caps at the server level, allocate power resources across blades and perform remote reboot and reset operations.
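
As an illustration of what blade-level power capping can look like in practice, the sketch below assumes the management controller exposes a standard Redfish Power resource; the host name, chassis ID, credentials and 400-watt limit are hypothetical placeholders, and Supermicro's actual chassis management interface may differ.

```python
# Hedged sketch: set a per-blade power cap through a Redfish-style API.
# Host, credentials, chassis ID and the limit value are illustrative only.
import requests

CMM_HOST = "https://cmm.example.local"   # hypothetical management address
CHASSIS_ID = "Blade1"                    # hypothetical blade chassis ID
AUTH = ("admin", "password")             # placeholder credentials

payload = {
    "PowerControl": [
        {"PowerLimit": {"LimitInWatts": 400}}  # example cap, not a recommendation
    ]
}

resp = requests.patch(
    f"{CMM_HOST}/redfish/v1/Chassis/{CHASSIS_ID}/Power",
    json=payload,
    auth=AUTH,
    verify=False,  # lab-only: skip TLS verification for a self-signed BMC cert
    timeout=10,
)
resp.raise_for_status()
print("Power cap request accepted:", resp.status_code)
```

Reading the same resource back with a GET request is a common way to confirm the applied limit.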

The company said the management system enables remote access to BIOS configurations and operating system consoles through Serial over LAN and embedded KVM functions. Because the controller operates on a separate processor, monitoring and control functions remain available regardless of server power state or CPU activity.
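
For the Serial over LAN path, one widely used way to reach a console on an IPMI-capable BMC is ipmitool's sol activate command; the sketch below wraps that call in Python, with the host and credentials as placeholder assumptions rather than Supermicro-specific values.

```python
# Hedged sketch: open a Serial over LAN session to a blade's BMC via ipmitool.
# Host and credentials are placeholders; requires ipmitool installed locally.
import subprocess

BMC_HOST = "blade1-bmc.example.local"   # hypothetical blade BMC address
BMC_USER = "admin"                      # placeholder credentials
BMC_PASS = "password"

subprocess.run(
    [
        "ipmitool",
        "-I", "lanplus",     # use the IPMI v2.0 RMCP+ interface
        "-H", BMC_HOST,
        "-U", BMC_USER,
        "-P", BMC_PASS,
        "sol", "activate",   # attach this terminal to the blade's serial console
    ],
    check=True,
)
```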

Hot-swappable design targets data center modernization

The blade platform incorporates hot-swappable components intended to simplify maintenance and upgrades. Supermicro said the design can reduce cabling requirements by up to 93 percent and lower space consumption by as much as 50 percent compared with traditional rack-mounted servers.

According to the company, these reductions can help lower total cost of ownership and support phased modernization of existing data center environments without sacrificing compute performance.

Memory, storage and expansion capabilities

Supermicro said each blade supports up to 24 DIMM slots, enabling configurations of as much as 3 terabytes of DDR5 RDIMM memory operating at 6,400 MT/s or up to 1.5 terabytes of DDR5 MRDIMM memory at 8,800 MT/s.
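
To make those figures concrete, the quoted capacities correspond to populating all 24 slots, and peak bandwidth scales with the transfer rate. The sketch below works through that arithmetic; the per-DIMM sizes and the one-DIMM-per-channel layout across 24 channels are assumptions used for illustration, not published specifications.

```python
# Back-of-the-envelope memory capacity and theoretical bandwidth, based on the
# figures quoted above. DIMM sizes and the 24-channel, one-DIMM-per-channel
# layout are assumptions for illustration.
DIMM_SLOTS = 24
BYTES_PER_TRANSFER = 8  # 64-bit DDR5 data bus per channel

configs = {
    # name: (assumed DIMM size in GB, transfer rate in MT/s)
    "DDR5 RDIMM":  (128, 6400),  # 24 x 128 GB = 3 TB
    "DDR5 MRDIMM": (64, 8800),   # 24 x 64 GB  = 1.5 TB
}

for name, (dimm_gb, mts) in configs.items():
    capacity_tb = DIMM_SLOTS * dimm_gb / 1024
    # Theoretical peak: transfers/s x 8 bytes per channel x 24 channels
    bandwidth_gbs = mts * BYTES_PER_TRANSFER * DIMM_SLOTS / 1000
    print(f"{name}: {capacity_tb:.1f} TB capacity, "
          f"~{bandwidth_gbs:.0f} GB/s theoretical peak")
```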

Storage options include up to four PCIe 5.0 NVMe solid-state drives, two hot-swappable E1.S SSDs and two M.2 drives. The platform also supports three PCIe 5.0 x16 expansion cards per blade, allowing configurations such as multiple 400-gigabit InfiniBand or Ethernet adapters or a mix of high-speed networking and GPU acceleration.
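
As a sanity check on pairing 400-gigabit adapters with these slots, a 400 Gb/s link needs roughly 50 GB/s of host bandwidth, which an x16 slot only provides at PCIe 5.0 speeds. The comparison below uses standard PCIe per-lane rates, not Supermicro-published figures.

```python
# Rough comparison of x16 slot bandwidth against a 400 Gb/s network adapter.
# Per-lane rates and encoding overheads are standard PCIe figures.
PCIE_LANE_GBPS = {
    "PCIe 3.0": 8.0 * (128 / 130),   # 8 GT/s per lane, 128b/130b encoding
    "PCIe 5.0": 32.0 * (128 / 130),  # 32 GT/s per lane, 128b/130b encoding
}
LANES = 16
LINK_GBPS = 400  # 400-gigabit InfiniBand or Ethernet adapter

for gen, lane_gbps in PCIE_LANE_GBPS.items():
    slot_gbs = lane_gbps * LANES / 8   # usable GB/s for an x16 slot
    needed_gbs = LINK_GBPS / 8         # ~50 GB/s to keep the link busy
    verdict = "sufficient" if slot_gbs >= needed_gbs else "insufficient"
    print(f"{gen} x16: ~{slot_gbs:.0f} GB/s vs ~{needed_gbs:.0f} GB/s needed -> {verdict}")
```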

Supermicro said integrated networking is provided through dual 25-gigabit Ethernet switches with 100-gigabit uplinks located at the rear of the enclosure, reducing external cabling requirements while supporting high-bandwidth workloads.

The company said the system is designed to address rising compute density, power efficiency and cooling demands in modern data centers deploying AI, HPC and graphics-intensive applications.
