NVIDIA launches advanced DGX SuperPOD with Blackwell GPUs

NVIDIA has unveiled its latest enterprise AI infrastructure: the DGX SuperPOD built with Blackwell Ultra GPUs, designed to boost AI processing and reasoning capabilities across industries.

The new DGX GB300 and DGX B300 systems let enterprises deploy DGX SuperPOD AI supercomputers that promise improved AI reasoning, particularly for applications requiring rapid responses. The systems are built for AI workloads that demand considerable computing power across pretraining, post-training, and production operation.

Jensen Huang, Founder and CEO of NVIDIA, highlighted the race among companies to establish AI factories capable of handling the escalating demands of reasoning AI. "AI is advancing at light speed, and companies are racing to build AI factories that can scale to meet the processing demands of reasoning AI and inference time scaling," he stated. "The NVIDIA Blackwell Ultra DGX SuperPOD provides out-of-the-box AI supercomputing for the age of agentic and physical AI."

The DGX GB300 system is built on NVIDIA Grace Blackwell Ultra Superchips, combining 36 NVIDIA Grace CPUs with 72 NVIDIA Blackwell Ultra GPUs, and uses liquid-cooled infrastructure designed for real-time agent responses on advanced AI models.

The air-cooled DGX B300 system, by contrast, uses the NVIDIA B300 NVL16 architecture and is tailored to the computational requirements data centres face in generative and agentic AI applications.

NVIDIA has also announced NVIDIA Instant AI Factory, a managed service powered by Blackwell Ultra-based DGX SuperPODs. Equinix will be the first to offer the newly announced DGX systems in its preconfigured AI-ready data centres across 45 locations worldwide.

A DGX SuperPOD configured with DGX GB300 systems can scale to tens of thousands of NVIDIA Grace Blackwell Ultra Superchips, interconnected with NVIDIA NVLink, Quantum-X800 InfiniBand, and Spectrum-X Ethernet networking. The infrastructure is intended for the most demanding compute workloads, with DGX GB300 systems reportedly delivering up to 70 times more AI performance than systems built on NVIDIA Hopper GPUs and incorporating 38TB of fast memory for performance and scalability.
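
For context only (this code is not part of NVIDIA's announcement), the sketch below shows how a multi-node training job typically initialises GPU communication over NVLink- and InfiniBand-backed fabrics using PyTorch's NCCL backend. The environment variables are those set by a standard launcher such as torchrun, and the whole example is a generic illustration rather than anything specific to DGX systems.

# Minimal sketch: multi-node, multi-GPU communication with PyTorch's NCCL backend.
# NCCL uses NVLink for intra-node traffic and InfiniBand/Ethernet (RDMA) between
# nodes when available. RANK, WORLD_SIZE and LOCAL_RANK are set by the launcher.
import os
import torch
import torch.distributed as dist

def init_distributed():
    rank = int(os.environ["RANK"])
    world_size = int(os.environ["WORLD_SIZE"])
    local_rank = int(os.environ["LOCAL_RANK"])
    dist.init_process_group(backend="nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(local_rank)
    return local_rank

if __name__ == "__main__":
    local_rank = init_distributed()
    # A simple all-reduce confirms every GPU can take part in collective
    # communication over the fabric.
    x = torch.ones(1, device=f"cuda:{local_rank}")
    dist.all_reduce(x, op=dist.ReduceOp.SUM)
    print(f"rank {dist.get_rank()}: sum across {dist.get_world_size()} GPUs = {x.item()}")
    dist.destroy_process_group()

Launched with torchrun across several nodes, the final all-reduce simply verifies that the interconnect is working end to end before a real training workload is started.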

The DGX B300 system aims to bring energy-efficient AI inference and reasoning to a broader range of data centres, reportedly delivering 11 times faster AI inference and a fourfold speedup in training compared with the previous generation. Each unit ships with substantial HBM3e memory and advanced networking, including NVIDIA ConnectX-8 SuperNICs and BlueField-3 DPUs.

To automate the management and operation of this infrastructure, NVIDIA has introduced Mission Control software, designed specifically for Blackwell-based DGX systems. The systems also support enterprise AI applications through the NVIDIA AI Enterprise software platform, which includes a suite of NVIDIA microservices and other resources for improving AI agent performance.
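
Purely as an illustration (the announcement itself includes no code), the snippet below queries an inference microservice that exposes an OpenAI-compatible chat endpoint, which is how NVIDIA's NIM-style microservices are commonly consumed. The URL and model name are hypothetical placeholders, not values from the announcement.

# Minimal sketch: calling an OpenAI-compatible chat completions endpoint
# exposed by a locally deployed inference microservice. URL and model name
# are placeholders for this example.
import json
import urllib.request

URL = "http://localhost:8000/v1/chat/completions"  # hypothetical local deployment
payload = {
    "model": "example-model",  # placeholder model identifier
    "messages": [{"role": "user", "content": "Summarise today's agent tasks."}],
    "max_tokens": 128,
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
    print(body["choices"][0]["message"]["content"])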

As demand for AI infrastructure continues to rise, NVIDIA's Instant AI Factory, delivered in collaboration with Equinix, aims to provide businesses with AI-ready facilities optimised for model training and real-time reasoning, reducing the time and effort typically involved in infrastructure setup.
