SKT Builds AI Inference Servers with Arm & Rebellions


SK Telecom is tightening its grip on the AI data center stack. The company has signed a strategic MoU with Arm and Rebellions to co-develop AI inference server solutions designed for next-generation data centers.

The partnership focuses on integrating Arm’s AGI CPU with Rebellions’ upcoming RebelCard accelerator. SKT plans to test and validate these systems inside its own AI data centers, turning internal infrastructure into a proving ground for production-grade deployments.

The move signals a shift from component-level optimization toward vertically aligned infrastructure where compute, software, and deployment strategy evolve together.

Heterogeneous compute reshapes inference economics

The alliance centers on heterogeneous computing architectures, where CPUs and AI accelerators split workloads. CPUs manage system orchestration and general-purpose processing, while accelerators handle inference-heavy tasks.

“Compared to GPU-based servers, this architecture offers higher power efficiency and processing efficiency for AI inference tasks, as well as reduced operating costs. As a result, it is recognized as an efficient server architecture for data centers running large-scale AI services,” SKT said in a statement.

This positioning directly challenges GPU-dominant infrastructure, especially in inference-heavy environments where cost-per-query and power density define competitiveness.

Arm AGI CPU and RebelCard target scale efficiency

Arm’s AGI CPU enters the data center with a focus on high-density inference environments and large-scale AI deployments. Rebellions’ RebelCard complements it as a specialized inference accelerator.

The companies have already demonstrated early viability. Last month, Arm and Rebellions ran a live agentic AI service powered by OpenAI’s GPT OSS 120B model using their combined silicon stack. That demonstration positions the architecture as a credible alternative for hyperscale workloads, especially where efficiency outweighs raw training throughput.

SKT will deploy these AI inference servers within its own AI data centers to validate performance and operational stability. The company is also considering running its A.X K1 sovereign AI foundation model on the infrastructure. This internal-first deployment strategy reduces go-to-market risk while aligning hardware innovation with real-world workloads.

“By providing a full package that combines infrastructure optimized for inference with our sovereign AI foundation model A.X K1, we will further enhance the competitiveness of our AI data centers,” said Lee Jae-shin, SKT’s head of AI business development.

AIDC ambition drives end-to-end control

The collaboration feeds into SKT’s broader ambition to become a full-scale AI data center (AIDC) developer. The company aims to control design, construction, and operations across the entire lifecycle.

In November 2025, SKT declared its intent to oversee complete AIDC projects. More recently, the International Telecommunication Union approved SKT’s AIDC architecture as an international standard, setting a unified framework for global interoperability.

This standardization effort repositions SKT from infrastructure operator to ecosystem architect.

Beyond hardware: full-stack integration strategy

The partnership extends past silicon integration into full-stack development. The companies will co-develop firmware and broader software layers to ensure tight coupling between hardware and orchestration systems.

In a separate statement, Rebellions emphasized that the collaboration spans the entire value chain from infrastructure design to deployment and validation in real-world environments. After validation, the partners plan to pursue broader commercial rollouts, targeting sovereign AI data center markets globally, with a strong focus on Asia.

“By providing our ‘RebelCard’ alongside our full-stack software, Rebellions has become a core pillar supporting next-generation AI data centers. We expect this ‘one-team’ collaboration of experts to serve as a significant precedent in the industry for building AI-specialized infrastructure,” said Rebellions CTO Jinwook Oh.

Infrastructure convergence defines next AI phase

As AI systems scale, CPUs are reasserting their role as coordinators of complex infrastructure layers.

“Together with Rebellions and SKT, we’re enabling scalable infrastructure for sovereign AI and telecommunications markets,” said Eddie Ramirez, vice president of go-to-market for Arm’s Cloud AI Business Unit.

He noted that CPUs orchestrate workloads across accelerators, memory, and networking, an increasingly critical function as distributed AI systems grow more complex.

SKT’s alliance with Arm and Rebellions reflects a broader industry pivot toward tightly integrated, efficiency-first AI infrastructure. Instead of chasing raw compute scale, operators now optimize for inference economics, sovereignty, and deployment agility.

However, success will depend on execution at scale where performance, cost, and ecosystem adoption converge.
