Nvidia DGX

Nvidia DGX is a line of Nvidia-produced servers and workstations that specialize in using general-purpose computing on GPUs (GPGPU) to accelerate deep learning applications.[1] The typical design of a DGX system is based on a rackmount chassis with a motherboard carrying high-performance x86 server CPUs (typically Intel Xeons, with the exception of the DGX A100 and DGX Station A100, which both use AMD EPYC CPUs).[2] The main component of a DGX system is a set of 4 to 16 Nvidia Tesla GPU modules on an independent system board. DGX systems have large heatsinks and powerful fans to adequately cool thousands of watts of thermal output. The GPU modules are typically integrated into the system using a version of the SXM socket or a PCIe x16 slot.
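
As a rough illustration of how such a multi-GPU system appears to deep learning software, the sketch below enumerates the visible GPUs and their memory. PyTorch is assumed here purely as a common example framework; nothing about the query is DGX-specific.

```python
# Minimal sketch: list the GPUs a deep learning framework can see.
# Assumes PyTorch with CUDA support is installed; on a DGX-1 this
# would report eight Tesla-class devices.
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, "
              f"{props.total_memory / 2**30:.0f} GiB VRAM, "
              f"{props.multi_processor_count} SMs")
else:
    print("No CUDA-capable GPU visible.")
```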

[Image: A rack containing five DGX-1 supercomputers]

Models

DGX-1

DGX-1 servers feature 8 GPUs based on Pascal or Volta daughter cards[3] with 128 GB of total HBM2 memory, connected by an NVLink mesh network.[4] The DGX-1 was announced on April 6, 2016.[5] All models are based on a dual-socket configuration of Intel Xeon E5 CPUs and are equipped with the following features.

  • 512 GB of DDR4-2133
  • Dual 10 Gb networking
  • 4 x 1.92 TB SSDs
  • 3200 W of combined power supply capability
  • 3U rackmount chassis

The product line is intended to bridge the gap between GPUs and AI accelerators, in that the device has specific features specializing it for deep learning workloads.[6] The initial Pascal-based DGX-1 delivered 170 teraflops of half-precision processing,[7] while the Volta-based upgrade increased this to 960 teraflops.[8]
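
These headline numbers follow from simple per-card arithmetic. A minimal check, assuming Nvidia's public per-GPU figures (about 21.2 TFLOPS FP16 per P100, and the roughly 120 TFLOPS Tensor Core FP16 figure quoted for the V100 at launch, later revised to 125 TFLOPS):

```python
# Aggregate half-precision throughput of a DGX-1, 8 GPUs per system.
# Per-card figures are assumptions taken from Nvidia's public specs.
GPUS = 8
P100_FP16_TFLOPS = 21.2     # non-Tensor half precision per P100
V100_TENSOR_TFLOPS = 120    # launch-era Tensor Core FP16 figure per V100

print(f"Pascal DGX-1: ~{GPUS * P100_FP16_TFLOPS:.0f} TFLOPS FP16")      # ~170
print(f"Volta DGX-1:  ~{GPUS * V100_TENSOR_TFLOPS:.0f} TFLOPS tensor")  # ~960
# Note: with the later 125 TFLOPS per-card figure, 8 x 125 = 1000 TFLOPS.
```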

The DGX-1 was first available only in the Pascal-based configuration, with the first-generation SXM socket. A later revision of the DGX-1 offered support for first-generation Volta cards via the SXM-2 socket. Nvidia offered upgrade kits that allowed users with a Pascal-based DGX-1 to upgrade to a Volta-based DGX-1.[9][10]

  • The Pascal-based DGX-1 has two variants: one with a 16-core Intel Xeon E5-2698 v3 and one with a 20-core E5-2698 v4. Pricing for the v4 variant is unavailable; the Pascal-based DGX-1 with an E5-2698 v3 was priced at $129,000 at launch.[11]
  • The Volta-based DGX-1 is equipped with an E5-2698 v4 and was priced at $149,000 at launch.[11]

DGX Station

Designed as a turnkey deskside AI supercomputer, the DGX Station is a tower computer that can function completely independently of typical datacenter infrastructure such as cooling, redundant power, or 19-inch racks.

The DGX Station was first available with the following specifications.[12]

  • Four Volta-based Tesla V100 accelerators, each with 16 GB of HBM2 memory
  • 480 TFLOPS FP16
  • Single Intel Xeon E5-2698 v4[13]
  • 256 GB DDR4
  • 4x 1.92 TB SSDs
  • Dual 10 Gb Ethernet

The DGX Station is water-cooled to better manage the heat of almost 1500 W of total system components, which allows it to stay under 35 dB of noise under load.[14] This, among other features, made the system a compelling purchase for customers without the infrastructure to run rackmount DGX systems, which can be loud, output a lot of heat, and take up a large area. This was Nvidia's first venture into bringing high-performance computing deskside, which has since remained a prominent marketing strategy for Nvidia.[15]

DGX-2

The successor to the Nvidia DGX-1 is the Nvidia DGX-2, which uses sixteen Volta-based V100 32 GB (second generation) cards in a single unit. It was announced on March 27, 2018.[16] The DGX-2 delivers 2 petaflops and uses NVSwitch for high-bandwidth internal communication, giving the GPUs 512 GB of shared HBM2 memory for tackling massive datasets; the system also carries 1.5 TB of DDR4. Also present are eight 100 Gb/s InfiniBand cards and 30.72 TB of SSD storage,[17] all enclosed within a massive 10U rackmount chassis and drawing up to 10 kW under maximum load.[18] The initial price for the DGX-2 was $399,000.[19]

The DGX-2 differs from other DGX models in that it contains two separate GPU daughterboards, each with eight GPUs. These boards are connected by an NVSwitch system that allows for full bandwidth communication across all GPUs in the system, without additional latency between boards.[18]
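
In software, this all-to-all connectivity is typically exercised through collective operations. A minimal sketch, assuming PyTorch's torch.distributed with the NCCL backend (an assumption; neither library is named in this article), which routes traffic over NVLink/NVSwitch where available:

```python
# Minimal sketch: a SUM all-reduce across all GPUs of one node.
# Launch with: torchrun --nproc_per_node=<num_gpus> allreduce_demo.py
import os
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")   # NCCL picks NVLink/NVSwitch paths
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    # Each GPU contributes its rank; afterwards every GPU holds the sum.
    x = torch.full((1024, 1024), float(dist.get_rank()), device="cuda")
    dist.all_reduce(x, op=dist.ReduceOp.SUM)
    if dist.get_rank() == 0:
        print(f"world_size={dist.get_world_size()}, x[0,0]={x[0,0].item()}")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```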

A higher-performance variant of the DGX-2, the DGX-2H, was offered as well. The DGX-2H replaced the DGX-2's dual Intel Xeon Platinum 8168s with upgraded dual Intel Xeon Platinum 8174s. This upgrade does not increase core count per system, as both CPUs have 24 cores, nor does it enable any new functions of the system, but it does increase the base frequency of the CPUs from 2.7 GHz to 3.1 GHz.[20][21][22]

DGX A100 Server

Announced and released on May 14, 2020, the DGX A100 was the third generation of DGX server, including 8 Ampere-based A100 accelerators.[23] Also included are 15 TB of PCIe gen 4 NVMe storage,[24] 1 TB of RAM, and eight Mellanox-powered 200 Gb/s HDR InfiniBand ConnectX-6 NICs. The DGX A100 is in a much smaller enclosure than its predecessor, the DGX-2, taking up only 6 rack units.[25]

The DGX A100 also moved to a 64-core AMD EPYC 7742 CPU, making it the first DGX server not built with an Intel Xeon CPU. The initial price for the DGX A100 server was $199,000.[23]

DGX Station A100

As the successor to the original DGX Station, the DGX Station A100 aims to fill the same niche: a quiet, efficient, turnkey cluster-in-a-box solution that can be purchased, leased, or rented by smaller companies or individuals who want to utilize machine learning. It follows many of the design choices of the original DGX Station, such as the tower orientation and the single-socket CPU mainboard, while adding a new refrigerant-based cooling system and a reduced number of accelerators compared to the corresponding rackmount DGX A100 of the same generation.[15] The DGX Station A100 is priced at $149,000 for the 320G model and $99,000 for the 160G model. Nvidia also offers Station rental at roughly $9,000 per month through partners in the US (rentacomputer.com) and Europe (iRent IT Systems) to help reduce the costs of implementing these systems at a small scale.[26][27]

The DGX Station A100 comes with two different configurations of the built-in A100 accelerators.

  • Four Ampere-based A100 accelerators, configured with 40 GB (HBM2) or 80 GB (HBM2e) of memory each, giving a total of 160 GB or 320 GB and resulting in the DGX Station A100 160G and 320G variants respectively.
  • 2.5 PFLOPS FP16
  • Single 64-core AMD EPYC 7742
  • 512 GB DDR4
  • 1 x 1.92 TB NVMe OS drive
  • 1 x 7.68 TB U.2 NVMe Drive
  • Dual-port 10 Gb Ethernet
  • Single-port 1 Gb BMC

DGX H100 Server

Announced March 22, 2022[28] and planned for release in Q3 2022,[29] the DGX H100 is the fourth generation of DGX server, built with 8 Hopper-based H100 accelerators, for a total of 32 PFLOPs of FP8 AI compute and 640 GB of HBM3 memory, an upgrade over the DGX A100's HBM2 memory. This upgrade also increases VRAM bandwidth to 3 TB/s.[30] The DGX H100 increases the rackmount size to 8U to accommodate the 700 W TDP of each H100 SXM card. The DGX H100 also has two 1.92 TB SSDs for operating system storage and 30.72 TB of solid-state storage for application data.

Another notable addition is the presence of two Nvidia BlueField-3 DPUs[31] and the upgrade to 400 Gb/s InfiniBand via Mellanox ConnectX-7 NICs, double the bandwidth of the DGX A100. The DGX H100 uses new 'Cedar Fever' cards, each with four ConnectX-7 400 Gb/s controllers, and two cards per system. This gives the DGX H100 3.2 Tb/s of fabric bandwidth across InfiniBand.[32]
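
These headline figures are internally consistent and can be checked with back-of-the-envelope arithmetic. The ~4 PFLOPS of FP8 per H100 is Nvidia's figure and counts structured sparsity; it is an assumption not stated in this article.

```python
# Sanity checks for the DGX H100 figures quoted above.
GPUS = 8

print(GPUS * 80)        # 640 GB of HBM3 (80 GB per H100)
print(4.8 * 5120 / 8)   # 3072 GB/s per GPU: 4.8 Gbit/s pins x 5120-bit bus, ~3 TB/s
print(2 * 4 * 400)      # 3200 Gb/s fabric: 2 Cedar Fever cards x 4 NICs x 400 Gb/s
print(GPUS * 4)         # 32 PFLOPS FP8, at ~4 PFLOPS per H100 (sparsity figure)
```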

The DGX H100 has two Intel Xeon Platinum 8480C scalable CPUs (codenamed Sapphire Rapids)[33] and 2 TB of system memory.[34]

The DGX H100 was priced at £379,000 (approximately US$482,000) at release.[35]

DGX GH200 AI Supercomputer

Announced in May 2023, the DGX GH200 is a new class of AI supercomputer that connects 256 Nvidia Grace Hopper Superchips into a single GPU. The DGX GH200 is designed to handle terabyte-class models for massive recommender systems, generative AI, and graph analytics, offering 144 terabytes (TB) of shared memory with linear scalability for giant AI models.[36]
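
The 144 TB figure follows from the per-superchip memory. A sketch of the arithmetic, assuming Nvidia's published GH200 configuration of 96 GB of HBM3 plus 480 GB of LPDDR5X per superchip (figures not stated in this article):

```python
# Shared-memory arithmetic for the DGX GH200.
SUPERCHIPS = 256
GB_PER_SUPERCHIP = 96 + 480        # HBM3 + LPDDR5X (assumed Nvidia spec)

total_gb = SUPERCHIPS * GB_PER_SUPERCHIP
print(total_gb, "GB =", total_gb / 1024, "TB")  # 147456 GB = 144 TB (binary TB)
```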

DGX SuperPod

The DGX SuperPod is a high-performance turnkey supercomputer solution provided by Nvidia using DGX hardware.[37] This tightly integrated system combines high-performance DGX compute nodes with fast storage and high-bandwidth networking to provide a plug-and-play solution for extremely demanding machine learning workloads. The Selene supercomputer, built and operated by Nvidia, is one example of a DGX SuperPod-based system.

Selene, built from 280 DGX A100 nodes, ranked fifth on the TOP500 list of the most powerful supercomputers at the time of its completion and has continued to remain high in performance. The same integration is available to any customer with minimal effort on their behalf, and the new Hopper-based SuperPod can scale to 32 DGX H100 nodes, for a total of 256 H100 GPUs and 64 x86 CPUs. This gives the complete SuperPod 20 TB of HBM3 memory, 70.4 TB/s of bisection bandwidth, and up to 1 exaFLOP of FP8 AI compute.[38] These SuperPods can be further joined to create even larger supercomputers.

The upcoming Eos supercomputer, designed, built, and operated by Nvidia,[39][40][41] will be constructed of 18 H100-based SuperPods, totaling 576 DGX H100 systems, 500 Quantum-2 InfiniBand switches, and 360 NVLink switches. This will allow Eos to deliver 18 EFLOPs of FP8 compute and 9 EFLOPs of FP16 compute, making Eos the fastest AI supercomputer in the world.[42][43]
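
The SuperPod and Eos totals in the two preceding paragraphs can be reproduced with the same per-GPU assumptions (~4 PFLOPS FP8 and ~2 PFLOPS FP16 per H100, Nvidia's sparsity-inclusive figures, which are not stated in this article):

```python
# Scaling arithmetic for a Hopper SuperPod and for Eos.
GPUS_PER_NODE = 8
HBM3_GB_PER_GPU = 80
FP8_PFLOPS_PER_GPU = 4    # assumed per-H100 figure (with sparsity)
FP16_PFLOPS_PER_GPU = 2   # assumed per-H100 figure (with sparsity)

superpod_gpus = 32 * GPUS_PER_NODE                  # 256 GPUs
print(superpod_gpus * HBM3_GB_PER_GPU / 1000)       # ~20.5 TB of HBM3
print(superpod_gpus * FP8_PFLOPS_PER_GPU / 1000)    # ~1 EFLOP of FP8

eos_gpus = 18 * 32 * GPUS_PER_NODE                  # 576 systems -> 4608 GPUs
print(eos_gpus * FP8_PFLOPS_PER_GPU / 1000)         # ~18.4 EFLOPs FP8
print(eos_gpus * FP16_PFLOPS_PER_GPU / 1000)        # ~9.2 EFLOPs FP16
```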

As Nvidia does not produce storage devices or systems, Nvidia SuperPods rely on partners to provide high-performance storage. Current storage partners for Nvidia SuperPods are Dell EMC, DDN, HPE, IBM, NetApp, Pavilion Data, and VAST Data.[44]

DGX Helios

The DGX Helios supercomputer combines four DGX GH200 systems, connected via Quantum-2 InfiniBand.

Accelerators

Comparison of accelerators used in DGX:[45][46][47]



| Accelerator | H100 | A100 80GB | A100 40GB | V100 32GB | V100 16GB | P100 |
|---|---|---|---|---|---|---|
| Architecture | Hopper | Ampere | Ampere | Volta | Volta | Pascal |
| Socket | SXM5 | SXM4 | SXM4 | SXM3 | SXM2 | SXM/SXM2 |
| FP32 CUDA cores | 16896 | 6912 | 6912 | 5120 | 5120 | N/A |
| FP64 cores (excl. Tensor) | 4608 | 3456 | 3456 | 2560 | 2560 | 1792 |
| Mixed INT32/FP32 cores | 16896 | 6912 | 6912 | N/A | N/A | 3584 |
| INT32 cores | N/A | N/A | N/A | 5120 | 5120 | N/A |
| Boost clock | 1780 MHz | 1410 MHz | 1410 MHz | 1530 MHz | 1530 MHz | 1480 MHz |
| Memory clock | 4.8 Gbit/s HBM3 | 3.2 Gbit/s HBM2 | 2.4 Gbit/s HBM2 | 1.75 Gbit/s HBM2 | 1.75 Gbit/s HBM2 | 1.4 Gbit/s HBM2 |
| Memory bus width | 5120-bit | 5120-bit | 5120-bit | 4096-bit | 4096-bit | 4096-bit |
| Memory bandwidth | 3072 GB/s | 2039 GB/s | 1555 GB/s | 900 GB/s | 900 GB/s | 720 GB/s |
| VRAM | 80 GB | 80 GB | 40 GB | 32 GB | 16 GB | 16 GB |
| Single precision (FP32) | 60 TFLOPS | 19.5 TFLOPS | 19.5 TFLOPS | 15.7 TFLOPS | 15.7 TFLOPS | 10.6 TFLOPS |
| Double precision (FP64) | 30 TFLOPS | 9.7 TFLOPS | 9.7 TFLOPS | 7.8 TFLOPS | 7.8 TFLOPS | 5.3 TFLOPS |
| INT8 (non-Tensor) | N/A | N/A | N/A | 62 TOPS | 62 TOPS | N/A |
| INT8 dense Tensor | 4000 TOPS | 624 TOPS | 624 TOPS | N/A | N/A | N/A |
| INT32 | N/A | 19.5 TOPS | 19.5 TOPS | 15.7 TOPS | 15.7 TOPS | N/A |
| FP16 | N/A | 78 TFLOPS | 78 TFLOPS | 31.4 TFLOPS | 31.4 TFLOPS | 21.2 TFLOPS |
| FP16 dense Tensor | 2000 TFLOPS | 312 TFLOPS | 312 TFLOPS | 125 TFLOPS | 125 TFLOPS | N/A |
| bfloat16 dense Tensor | 2000 TFLOPS | 312 TFLOPS | 312 TFLOPS | N/A | N/A | N/A |
| TensorFloat-32 (TF32) dense Tensor | 1000 TFLOPS | 156 TFLOPS | 156 TFLOPS | N/A | N/A | N/A |
| FP64 dense Tensor | 60 TFLOPS | 19.5 TFLOPS | 19.5 TFLOPS | N/A | N/A | N/A |
| Interconnect (NVLink) | 900 GB/s | 600 GB/s | 600 GB/s | 300 GB/s | 300 GB/s | 160 GB/s |
| GPU | GH100 | GA100 | GA100 | GV100 | GV100 | GP100 |
| L1 cache size | 25344 KB (192 KB x 132) | 20736 KB (192 KB x 108) | 20736 KB (192 KB x 108) | 10240 KB (128 KB x 80) | 10240 KB (128 KB x 80) | 1344 KB (24 KB x 56) |
| L2 cache size | 51200 KB | 40960 KB | 40960 KB | 6144 KB | 6144 KB | 4096 KB |
| TDP | 700 W | 400 W | 400 W | 350 W | 300 W | 300 W |
| GPU die size | 814 mm² | 826 mm² | 826 mm² | 815 mm² | 815 mm² | 610 mm² |
| Transistor count | 80B | 54.2B | 54.2B | 21.1B | 21.1B | 15.3B |
| Manufacturing process | TSMC 4 nm N4 | TSMC 7 nm N7 | TSMC 7 nm N7 | TSMC 12 nm FFN | TSMC 12 nm FFN | TSMC 16 nm FinFET+ |
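
The memory bandwidth column follows from the memory clock and bus width (bandwidth in GB/s is roughly the per-pin rate in Gbit/s times the bus width in bits, divided by 8). The quoted clocks and bandwidths are rounded independently, so small mismatches remain; a sketch of the check:

```python
# Verify the bandwidth column: pin rate (Gbit/s) x bus width (bits) / 8 bits-per-byte.
accelerators = {
    # name: (pin rate Gbit/s, bus width bits, quoted bandwidth GB/s)
    "H100":      (4.8,  5120, 3072),
    "A100 80GB": (3.2,  5120, 2039),
    "A100 40GB": (2.4,  5120, 1555),
    "V100":      (1.75, 4096,  900),
    "P100":      (1.4,  4096,  720),
}
for name, (rate, width, quoted) in accelerators.items():
    computed = rate * width / 8
    print(f"{name:10s} computed {computed:6.0f} GB/s, quoted {quoted} GB/s")
```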

References

  1. "NVIDIA DGX-1: Deep Learning Server for AI Research". NVIDIA. Retrieved 2022-03-24.
  2. "NVIDIA DGX Systems for Enterprise AI". NVIDIA. Retrieved 2022-03-24.
  3. "nvidia dgx-1" (PDF).
  4. "inside pascal". 5 April 2016. Eight GPU hybrid cube mesh architecture with NVLink
  5. "NVIDIA Unveils the DGX-1 HPC Server: 8 Teslas, 3U, Q2 2016".
  6. "deep learning supercomputer".
  7. "DGX-1 deep learning system" (PDF). NVIDIA DGX-1 Delivers 75X Faster Training...Note: Caffe benchmark with AlexNet, training 1.28M images with 90 epochs
  8. "DGX Server". DGX Server. Nvidia. Retrieved 7 September 2017.
  9. "Volta Architecture Whitepaper" (PDF). Nvidia. https://images.nvidia.com/content/volta-architecture/pdf/volta-architecture-whitepaper.pdf
  10. "DGX-1 User Guide" (PDF). Nvidia. https://images.nvidia.com/content/technologies/deep-learning/pdf/DGX-1-UserGuide.pdf
  11. Oh, Nate. "NVIDIA Ships First Volta-based DGX Systems". www.anandtech.com. Retrieved 2022-03-24.
  12. "CompecTA | NVIDIA DGX Station Deep Learning System". www.compecta.com. Retrieved 2022-03-24.
  13. "Intel® Xeon® Processor E5-2698 v4 (50M Cache, 2.20 GHz) - Product Specifications". Intel. Retrieved 2023-08-19.
  14. "NVIDIA DGX Station Data Science Supercomputer Datasheet" (PDF). Nvidia. https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/dgx-station/dgx-station-data-science-supercomputer-datasheet-v4.pdf
  15. "NVIDIA DGX Station A100 Datasheet" (PDF). Nvidia. https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/dgx-station/nvidia-dgx-station-a100-datasheet.pdf
  16. "Nvidia launches the DGX-2 with two petaFLOPS of power". 28 March 2018.
  17. "NVIDIA DGX -2 for Complex AI Challenges". NVIDIA. Retrieved 2022-03-24.
  18. Cutress, Ian. "NVIDIA's DGX-2: Sixteen Tesla V100s, 30 TB of NVMe, only $400K". www.anandtech.com. Retrieved 2022-04-28.
  19. "The NVIDIA DGX-2 is the world's first 2-petaflop single server supercomputer". www.hardwarezone.com.sg. Retrieved 2022-03-24.
  20. "DGX-2 User Guide" (PDF). Nvidia. https://docs.nvidia.com/dgx/pdf/dgx2-user-guide.pdf
  21. "Product Specifications". www.intel.com. Retrieved 2022-04-28.
  22. "Product Specifications". www.intel.com. Retrieved 2022-04-28.
  23. Ryan Smith (May 14, 2020). "NVIDIA Ampere Unleashed: NVIDIA Announces New GPU Architecture, A100 GPU, and Accelerator". AnandTech.
  24. Tom Warren; James Vincent (May 14, 2020). "Nvidia's first Ampere GPU is designed for data centers and AI, not your PC". The Verge.
  25. "Boston Labs welcomes the DGX A100 to our remote testing portfolio!". www.boston.co.uk. Retrieved 2022-03-24.
  26. Mayank Sharma (2021-04-13). "Nvidia will let you rent its mini supercomputers". TechRadar. Retrieved 2022-03-31.
  27. Jarred Walton (2021-04-12). "Nvidia Refreshes Expensive, Powerful DGX Station 320G and DGX Superpod". Tom's Hardware. Retrieved 2022-04-28.
  28. "NVIDIA Announces DGX H100 Systems – World's Most Advanced Enterprise AI Infrastructure". NVIDIA Newsroom. Retrieved 2022-03-24.
  29. Albert (2022-03-24). "NVIDIA H100: Overview, Specs, & Release Date | SeiMaxim". www.seimaxim.com. Retrieved 2022-08-22.
  30. Walton, Jarred (2022-03-22). "Nvidia Reveals Hopper H100 GPU With 80 Billion Transistors". Tom's Hardware. Retrieved 2022-03-24.
  31. "NVIDIA Announces DGX H100 Systems – World's Most Advanced Enterprise AI Infrastructure". NVIDIA Newsroom. Retrieved 2022-04-19.
  32. servethehome (2022-04-14). "NVIDIA Cedar Fever 1.6Tbps Modules Used in the DGX H100". ServeTheHome. Retrieved 2022-04-19.
  33. "NVIDIA DGX H100 Datasheet". www.nvidia.com. Retrieved 2023-08-02.
  34. "NVIDIA DGX H100". NVIDIA. Retrieved 2022-03-24.
  35. "Every NVIDIA DGX benchmarked & power efficiency & value compared, including the latest DGX H100". Retrieved 2023-03-01.
  36. "NVIDIA DGX GH200". NVIDIA. Retrieved 2022-03-24.
  37. "NVIDIA DGX SuperPOD Datasheet" (PDF). Nvidia. https://images.nvidia.com/aem-dam/Solutions/Data-Center/nvidia-dgx-superpod-datasheet.pdf
  38. Jarred Walton (2022-03-22). "Nvidia Reveals Hopper H100 GPU With 80 Billion Transistors". Tom's Hardware. Retrieved 2022-03-24.
  39. Vincent, James (2022-03-22). "Nvidia reveals H100 GPU for AI and teases 'world's fastest AI supercomputer'". The Verge. Retrieved 2022-05-16.
  40. Mellor, Chris (2022-03-31). "Nvidia Eos AI supercomputer will need a monster storage system". Blocks and Files. Retrieved 2022-05-21.
  41. Moss, Sebastian. "Nvidia announces Eos, "world's fastest AI supercomputer"". Data Center Dynamics. Retrieved 2022-05-21.
  42. "Nvidia Announces 'Eos' Supercomputer". HPCwire. 2022-03-22. Retrieved 2022-03-24.
  43. "NVIDIA Eos: the world's fastest AI supercomputer, 4608 x DGX H100 GPUs". TweakTown. 2022-03-22. Retrieved 2022-05-21.
  44. Mellor, Chris (2022-03-31). "Nvidia Eos AI supercomputer will need a monster storage system". Blocks and Files. Retrieved 2022-04-29.
  45. Smith, Ryan (March 22, 2022). "NVIDIA Hopper GPU Architecture and H100 Accelerator Announced: Working Smarter and Harder". AnandTech.
  46. Smith, Ryan (May 14, 2020). "NVIDIA Ampere Unleashed: NVIDIA Announces New GPU Architecture, A100 GPU, and Accelerator". AnandTech.
  47. "NVIDIA Tesla V100 tested: near unbelievable GPU power". TweakTown. September 17, 2017.