Supermicro announces integrated A100 GPU-powered systems
Super Micro Computer has announced two new systems designed for artificial intelligence (AI) deep learning applications that leverage third-generation NVIDIA HGX technology with the new NVIDIA A100 Tensor Core GPUs. The company has also announced full support for the new NVIDIA A100 GPUs across its broad portfolio of 1U, 2U, 4U and 10U GPU servers.
NVIDIA A100 is the first elastic, multi-instance GPU that unifies training, inference, HPC, and analytics.
“Expanding upon our portfolio of GPU systems and NVIDIA HGX-2 system technology, Supermicro is introducing a new 2U system implementing the new NVIDIA HGX A100 4 GPU board (formerly codenamed Redstone) and a new 4U system based on the new NVIDIA HGX A100 8 GPU board (formerly codenamed Delta) delivering 5 PetaFLOPS of AI performance,” says Supermicro CEO and president Charles Liang.
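The quoted 5 PetaFLOPS figure lines up with eight A100 GPUs running at peak Tensor Core throughput. A quick back-of-the-envelope check, assuming NVIDIA's published peak of 624 TFLOPS per A100 (FP16 Tensor Core with structured sparsity; this figure comes from NVIDIA's specifications, not from the announcement itself):

```python
# Sanity-check the quoted "5 PetaFLOPS of AI performance" for the 8-GPU board.
# Assumes 624 TFLOPS per A100 (peak FP16 Tensor Core with structured
# sparsity, per NVIDIA's published specs).
TFLOPS_PER_A100 = 624
NUM_GPUS = 8

total_tflops = TFLOPS_PER_A100 * NUM_GPUS
total_pflops = total_tflops / 1000

print(f"{total_pflops:.3f} PFLOPS")  # 4.992 PFLOPS, marketed as 5 PetaFLOPS
```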
“As GPU accelerated computing evolves and continues to transform data centers, Supermicro will provide customers the very latest system advancements to help them achieve maximum acceleration at every scale while optimising GPU utilisation. These new systems will significantly boost performance on all accelerated workloads for HPC, data analytics, deep learning training and deep learning inference.”
As a balanced data centre platform for HPC and AI applications, Supermicro’s new 2U system leverages the NVIDIA HGX A100 4 GPU board with four direct-attached NVIDIA A100 Tensor Core GPUs using PCI-E 4.0 for maximum performance and NVIDIA NVLink for high-speed GPU-to-GPU interconnects.
This GPU system accelerates compute, networking and storage performance with support for one PCI-E 4.0 x8 and up to four PCI-E 4.0 x16 expansion slots for GPUDirect RDMA high-speed network cards and storage such as InfiniBand HDR, which supports up to 200Gb per second bandwidth.
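The pairing of PCI-E 4.0 x16 slots with 200Gb-per-second InfiniBand HDR adapters is no accident: a single x16 slot has just enough usable bandwidth to saturate an HDR link. A rough comparison, assuming PCI-E 4.0's 16 GT/s per lane with 128b/130b encoding (figures from the PCI-E specification, not from the article):

```python
# Compare usable PCI-E 4.0 x16 bandwidth against an InfiniBand HDR NIC.
# Assumes 16 GT/s per lane with 128b/130b encoding (PCI-E 4.0 spec),
# one direction; overheads beyond line encoding are ignored.
PCIE4_GB_PER_LANE = 16 * 128 / 130 / 8   # ~1.97 GB/s per lane
LANES = 16
HDR_GB_PER_SEC = 200 / 8                 # 200 Gb/s -> 25 GB/s

x16_bw = PCIE4_GB_PER_LANE * LANES       # ~31.5 GB/s
print(f"x16 slot: {x16_bw:.1f} GB/s vs HDR NIC: {HDR_GB_PER_SEC:.1f} GB/s")
```

So a GPUDirect RDMA path over one x16 slot comfortably covers the 25 GB/s an HDR adapter can deliver.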
“AI models are exploding in complexity as they take on next-level challenges such as accurate conversational AI, deep recommender systems and personalised medicine,” says NVIDIA accelerated computing general manager and vice president Ian Buck.
“By implementing the NVIDIA HGX A100 platform into their new servers, Supermicro provides customers the powerful performance and massive scalability that enable researchers to train the most complex AI networks at unprecedented speed.”
Optimised for AI and machine learning, Supermicro’s new 4U system supports eight A100 Tensor Core GPUs.
The 4U form factor with eight GPUs is ideal for customers that want to scale their deployment as their processing requirements expand.
The new 4U system will have one NVIDIA HGX A100 8 GPU board with eight A100 GPUs all-to-all connected with NVIDIA NVSwitch for up to 600GB per second GPU-to-GPU bandwidth and eight expansion slots for GPUDirect RDMA high-speed network cards.
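The 600GB per second figure follows directly from the A100's NVLink configuration. A minimal sketch, assuming NVIDIA's published specs of twelve third-generation NVLink links per A100 at 50GB per second each (these per-link numbers are not stated in the article):

```python
# Reconstruct the quoted 600 GB/s GPU-to-GPU bandwidth from NVLink specs.
# Assumes 12 third-generation NVLink links per A100 at 50 GB/s each
# (total bidirectional bandwidth, per NVIDIA's published figures).
NVLINK_LINKS_PER_A100 = 12
GB_PER_SEC_PER_LINK = 50

total_bw = NVLINK_LINKS_PER_A100 * GB_PER_SEC_PER_LINK
print(f"{total_bw} GB/s")  # 600 GB/s per GPU pair via NVSwitch
```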
Ideal for deep learning training, this scale-up platform lets data centres create next-gen AI and maximise data scientists’ productivity with support for ten x16 expansion slots.
Customers can expect a significant performance boost across Supermicro’s extensive portfolio of 1U, 2U, 4U and 10U multi-GPU servers when they are equipped with the new NVIDIA A100 GPUs.
For maximum acceleration, Supermicro’s new A+ GPU system supports up to eight full-height double-wide (or single-wide) GPUs via direct-attach PCI-E 4.0 x16 CPU-to-GPU lanes without any PCI-E switch for the lowest latency and highest bandwidth.
The system also supports up to three additional high-performance PCI-E 4.0 expansion slots for a variety of uses, including high-performance networking connectivity up to 100G. An additional AIOM slot supports a Supermicro AIOM card or an OCP 3.0 mezzanine card.
With 1U, 2U, 4U and 10U rackmount GPU systems; Ultra, BigTwin and embedded systems supporting GPUs; as well as GPU blade modules for its 8U SuperBlade, Supermicro offers the industry’s widest and deepest selection of GPU systems to power applications from edge to cloud.
To deliver enhanced security and unprecedented performance at the edge, Supermicro plans to add the new NVIDIA EGX A100 configuration to its edge server portfolio.
The EGX A100 converged accelerator combines a Mellanox SmartNIC with GPUs powered by the new NVIDIA Ampere architecture, so enterprises can run AI at the edge more securely.