Intel Finds a Lifeline For its AI Chips by Partnering with NVIDIA’s Blackwell Ecosystem in a “Hybrid” Rack-Scale AI Platform

**Intel Showcases Hybrid AI Server Featuring NVIDIA’s Blackwell Technology: Promising Performance Boosts**

Intel has reportedly integrated its Gaudi 3 rack-scale AI solution with NVIDIA’s technology stack, pairing its own AI chips with NVIDIA’s Blackwell GPUs in a configuration it claims delivers substantial inference performance gains.

It is no secret that Intel’s AI chips, particularly the Gaudi lineup, have struggled to gain significant industry adoption, and the company has faced challenges competing against heavyweights like NVIDIA and AMD in terms of AI market revenue. Now, Intel appears to be pursuing a new strategy to promote its Gaudi platform.

According to SemiAnalysis, Intel plans to offer customers a new Gaudi 3 rack-scale system that pairs its own chips with NVIDIA’s Blackwell B200 GPUs in a hybrid configuration, alongside ConnectX networking. The announcement was highlighted at the recent OCP Global Summit, where Team Blue showcased its approach to capturing a share of the rack-scale AI segment.

### How Does This Hybrid System Work?

This system uses a disaggregated inference design: Intel’s Gaudi 3 AI chips handle the memory-bound ‘decode’ phase of inference workloads, while NVIDIA’s Blackwell B200 GPUs take on the compute-intensive ‘prefill’ stage. Blackwell GPUs excel at the large matrix-multiply bursts across the full context that prefill requires, thanks to their high-performance architecture, making them the natural fit for that stage.

Meanwhile, Intel’s Gaudi 3 chips in this setup prioritize memory bandwidth and Ethernet-centric scale-out, which suits the token-by-token decode phase. The result is a complementary pairing in which each component handles the work it is best suited for, as sketched below.
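To make the split concrete, here is a minimal, illustrative Python sketch of a disaggregated prefill/decode serving loop. This is not Intel’s or NVIDIA’s actual software; the function names, the dummy “model,” and the device routing are assumptions used purely to show where the KV-cache handoff between the prefill pool (B200) and the decode pool (Gaudi 3) would sit.

```python
# Illustrative sketch only: a toy scheduler that mimics disaggregated
# prefill/decode serving. All names here are hypothetical stand-ins for
# whatever software would drive the B200 (prefill) and Gaudi 3 (decode)
# pools in a real rack.
from dataclasses import dataclass, field


@dataclass
class KVCache:
    """Key/value cache produced by prefill and consumed by decode."""
    tokens: list[int]
    state: dict = field(default_factory=dict)


def prefill_on_b200(prompt_tokens: list[int]) -> KVCache:
    # Prefill is one large, compute-bound pass over the whole prompt,
    # so it is routed to the GPU pool (B200 in this rack).
    return KVCache(tokens=list(prompt_tokens), state={"len": len(prompt_tokens)})


def decode_on_gaudi3(cache: KVCache, max_new_tokens: int) -> list[int]:
    # Decode emits one token at a time and repeatedly re-reads the KV cache,
    # which is memory-bandwidth bound, so it is routed to the Gaudi 3 pool.
    output: list[int] = []
    for _ in range(max_new_tokens):
        next_token = (sum(cache.tokens) + len(output)) % 50_000  # dummy "model"
        cache.tokens.append(next_token)
        output.append(next_token)
    return output


def serve_request(prompt_tokens: list[int], max_new_tokens: int = 8) -> list[int]:
    # 1) prefill on the compute-heavy pool, 2) hand the KV cache across the
    #    network (Ethernet/ConnectX in the real rack), 3) decode on the
    #    bandwidth-heavy pool.
    cache = prefill_on_b200(prompt_tokens)
    return decode_on_gaudi3(cache, max_new_tokens)


if __name__ == "__main__":
    print(serve_request([101, 2023, 2003, 1037, 3231, 102]))
```

In a real deployment the KV-cache transfer between the two pools is the critical step, which is why the networking fabric described below matters as much as the accelerators themselves.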

### Networking and Hardware Specifications

On the networking front, the rack utilizes NVIDIA’s ConnectX-7 400 GbE NICs on the compute trays, combined with Broadcom’s Tomahawk 5 switches capable of 51.2 Tb/s at the rack level to ensure seamless all-to-all connectivity.

According to SemiAnalysis, each compute tray includes two Xeon CPUs, four Gaudi 3 AI chips, four NICs, and one NVIDIA BlueField-3 DPU. Each rack contains a total of sixteen trays.
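For a rough sense of scale, the per-tray figures above imply the following rack-level totals. This is a back-of-the-envelope tally only; the assumption of one 400 GbE port per ConnectX-7 NIC comes from the networking description above, and none of these totals are confirmed specifications.

```python
# Back-of-the-envelope rack totals, based only on the per-tray figures
# reported by SemiAnalysis (2 Xeons, 4 Gaudi 3, 4 NICs, 1 BlueField-3 DPU
# per tray; 16 trays per rack). The bandwidth math assumes one 400 GbE
# ConnectX-7 port per NIC, which is an assumption, not a confirmed spec.
TRAYS_PER_RACK = 16
XEONS_PER_TRAY = 2
GAUDI3_PER_TRAY = 4
NICS_PER_TRAY = 4
DPUS_PER_TRAY = 1
NIC_SPEED_GBPS = 400          # ConnectX-7 400 GbE
SWITCH_CAPACITY_TBPS = 51.2   # Broadcom Tomahawk 5

print("Xeon CPUs per rack:    ", TRAYS_PER_RACK * XEONS_PER_TRAY)   # 32
print("Gaudi 3 chips per rack:", TRAYS_PER_RACK * GAUDI3_PER_TRAY)  # 64
print("NICs per rack:         ", TRAYS_PER_RACK * NICS_PER_TRAY)    # 64
print("BlueField-3 DPUs:      ", TRAYS_PER_RACK * DPUS_PER_TRAY)    # 16

total_nic_tbps = TRAYS_PER_RACK * NICS_PER_TRAY * NIC_SPEED_GBPS / 1000
print(f"Aggregate NIC bandwidth: {total_nic_tbps} Tb/s "
      f"(vs. {SWITCH_CAPACITY_TBPS} Tb/s per Tomahawk 5 switch)")   # 25.6 Tb/s
```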

### Strategic Positioning and Performance Claims

Intel’s Gaudi platform positions itself as a cost-effective decode engine within an ecosystem largely dominated by NVIDIA. This hybrid solution embodies the phrase, “If you can’t beat them, join them.”

Preliminary claims suggest that this rack-scale configuration achieves 1.7 times faster prefill performance compared to a B200-only baseline when running small, dense models. However, it’s important to note that these claims have yet to undergo independent verification.

### What Does This Mean for Intel and NVIDIA?

This hybrid approach benefits Intel by enabling the company to monetize the Gaudi platform through integration into a comprehensive rack-scale system. For NVIDIA, it’s a testament to the strength of its networking technologies and GPU performance.

### Challenges and Outlook

Despite the promising prospects, the Gaudi AI platform still faces hurdles, particularly due to its relatively immature software stack, which could limit widespread adoption. Furthermore, with the Gaudi architecture reportedly set to be phased out within a few months, it remains uncertain whether this rack-scale hybrid configuration will achieve mainstream traction comparable to other alternatives on the market.

In summary, Intel’s collaboration with NVIDIA to create a hybrid AI server combining Gaudi 3 chips and Blackwell GPUs presents an innovative approach in the rack-scale AI space. While performance claims are encouraging, upcoming months will reveal how this integration fares in terms of adoption, software maturity, and long-term viability.
https://wccftech.com/intel-finds-a-lifeline-for-its-ai-chips-by-partnering-with-nvidia-blackwell-ecosystem/
