CoreWeave used Google Cloud Next 2026 this week to announce a suite of capabilities aimed at eliminating the friction that prevents enterprises from running AI workloads across multiple cloud environments simultaneously. The announcements center on three new products: CoreWeave Interconnect, SUNK Anywhere, and LOTA Cross-Cloud. Together they target the three primary failure points in cross-cloud AI infrastructure that have historically forced enterprises to choose a single cloud provider for large-scale AI deployments: networking bottlenecks, orchestration silos, and data movement costs.
At the center of the update, CoreWeave introduced CoreWeave Interconnect, a dedicated private connectivity service that links its infrastructure directly with Google Cloud through Google's Partner Cross-Cloud Interconnect capability. The service reduces deployment timelines from months to days by removing the complexity of third-party networking providers. It provides dedicated bandwidth, low-latency connectivity, and built-in MACsec encryption. Microsoft Azure is expected to join the collaboration later in 2026, which will extend the interconnect beyond the initial Google Cloud integration.
What the New Products Do
SUNK Anywhere extends CoreWeave’s Slurm-on-Kubernetes training orchestration system across cloud and on-premises environments, giving enterprise AI teams a unified control plane for scheduling and scaling distributed training jobs across CoreWeave, Google Cloud, AWS, and Azure simultaneously. LOTA Cross-Cloud extends CoreWeave’s data caching technology to deliver near-local throughput of up to 7 gigabytes per second per GPU to compute resources in other environments, without requiring bulk data movement. Together, the two products address the networking bottleneck and the data egress costs that have made multi-cloud AI prohibitively expensive for most enterprise operators. As covered in our analysis of the rise of inference clouds, the ability to distribute inference workloads across cloud environments without performance degradation is one of the defining competitive challenges for neocloud operators in 2026.
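Because SUNK builds on Slurm, jobs scheduled through it take the familiar shape of Slurm batch submissions. The sketch below is a generic multi-node Slurm training job, not documented SUNK configuration; the job name, resource counts, and script names are illustrative assumptions.

```shell
#!/bin/bash
# Hypothetical multi-node GPU training job submitted via sbatch.
# All values are placeholders, not real SUNK or CoreWeave identifiers.
#SBATCH --job-name=llm-pretrain
#SBATCH --nodes=4                # four GPU nodes
#SBATCH --gpus-per-node=8        # eight GPUs per node
#SBATCH --ntasks-per-node=8      # one task (process) per GPU
#SBATCH --time=48:00:00          # wall-clock limit

# srun launches one training process per GPU across all nodes;
# distributed-rendezvous details (e.g. torchrun) omitted for brevity.
srun python train.py --config config.yaml
```

A unified control plane means a script like this could, in principle, be scheduled against capacity in any of the connected environments without rewriting the job definition.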
What It Means for the Market
“Collaborating with Google Cloud extends the reach of CoreWeave’s AI-native platform through Google’s global network,” said Chen Goldberg, EVP of Product and Engineering at CoreWeave. “CoreWeave Interconnect introduces a solution to a problem that has prevented organizations from reaching across clouds to leverage AI resources.” The announcements also carry strategic implications for Google Cloud’s positioning in the enterprise AI market.
By enabling CoreWeave workloads to run natively within its network fabric, Google Cloud signals a genuine commitment to multi-cloud interoperability at a moment when most hyperscalers have historically resisted enabling customers to easily move workloads to competitors. CoreWeave also announced expanded integrations with Weights & Biases covering Google’s Gemini CLI, Gemma model access, and expanded TPU utilization telemetry, deepening the operational toolchain available to developers building across cloud environments.
