NVIDIA’s Spectrum-X Ethernet With MRC Redefines AI Networking: OpenAI, Microsoft, Oracle Already Deploying
NVIDIA today announced that its Spectrum-X Ethernet platform, now equipped with the Multipath Reliable Connection (MRC) protocol, is rapidly becoming the backbone of the world’s largest AI factories. The open specification, contributed to the Open Compute Project, has already been deployed by OpenAI, Microsoft, and Oracle to power gigascale AI training runs—setting a new industry benchmark for performance and reliability.
“Deploying MRC in the Blackwell generation was very successful and made possible by a strong collaboration with NVIDIA,” said Sachin Katti, head of industrial compute at OpenAI. “MRC’s end-to-end approach enabled us to avoid much of the typical network-related slowdowns and interruptions and maintain the efficiency of frontier training runs at scale.”
MRC is an RDMA transport protocol that allows a single connection to spread traffic across multiple network paths, improving throughput, load balancing, and availability. Think of replacing a single-lane road with an intelligent grid system that reroutes cars around traffic jams in real time—that is the leap MRC delivers for AI data centers.
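The core idea of spreading one connection's packets across many paths while steering away from congested ones can be sketched in a few lines. This is a simplified conceptual illustration, not NVIDIA's actual MRC implementation; the path names and the queue-depth congestion signal are hypothetical:

```python
import random

def pick_path(paths):
    """Choose a path for the next packet, weighting inversely by congestion.

    `paths` maps path name -> current queue depth (a stand-in congestion
    signal). Heavily loaded paths receive proportionally less new traffic,
    so a single connection's packets spread across the healthy paths
    instead of piling onto one congested route.
    """
    names = list(paths)
    weights = [1.0 / (1 + paths[p]) for p in names]
    return random.choices(names, weights=weights)[0]

# Example: path "b" is congested, so most packets land on "a" and "c".
paths = {"a": 0, "b": 50, "c": 1}
sent = [pick_path(paths) for _ in range(10_000)]
```

In this sketch the congested path still carries a trickle of traffic, which real fabrics often use to probe whether the congestion has cleared.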
Background
Building large-scale AI models requires networking that can handle unprecedented data volumes without bottlenecks. Traditional Ethernet fabrics often suffer from packet loss and congestion, which stalls GPU utilization and slows training. NVIDIA’s Spectrum-X was purpose-built to solve these challenges, combining hardware designed for AI workloads with advanced telemetry and fabric control.

Microsoft’s Fairwater and Oracle’s Abilene data centers—two of the largest AI factories ever built—now rely on MRC over Spectrum-X Ethernet to meet the extreme performance and efficiency demands of frontier large language models. These deployments prove MRC works at massive scale, delivering high GPU utilization by balancing traffic across all available paths and dynamically avoiding overloaded routes.

What This Means
The open release of MRC means any organization can build AI networks that match the performance of the hyperscalers. By enabling intelligent retransmission and real-time congestion avoidance, MRC minimizes the impact of packet loss on long-running training jobs, dramatically reducing GPU idle time. Administrators also gain granular visibility into traffic flows, simplifying troubleshooting and operational management.
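One common way a transport limits the cost of packet loss is selective retransmission: the receiver reports exactly which sequence numbers are missing, so the sender resends only the gaps rather than everything after the first loss. The toy function below illustrates that idea; it is a hypothetical sketch, not MRC's actual mechanism or wire format:

```python
def missing_sequences(received, highest):
    """Return sequence numbers in [0, highest] that were not received.

    With selective retransmission, only these gaps are resent; a
    go-back-N scheme would instead resend every packet after the
    first gap, wasting bandwidth and stalling the receiver longer.
    """
    got = set(received)
    return [s for s in range(highest + 1) if s not in got]

# Packet 3 was lost on one path; only packet 3 needs retransmission,
# not the already-delivered packets 4 through 6.
gaps = missing_sequences([0, 1, 2, 4, 5, 6], 6)
```

Under this model, a single lost packet costs one retransmission instead of a multi-packet replay, which is why loss on one path barely dents overall GPU utilization.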
This development signals a shift from proprietary, closed networking solutions to a standardized, open approach that accelerates AI innovation. With MRC, NVIDIA has effectively raised the bar for what Ethernet can achieve in the AI era, making gigascale training more accessible and efficient. Industry leaders are already voting with their deployments, confirming that this protocol is not just theoretical but a proven, production-ready technology.