NVIDIA DGX A100 / DGX H100 / DGX Station A100: Artificial Intelligence / High-Performance Computing Server, Deep Learning Server, GPU Inference Server

$16,800.00 – $29,800.00

The NVIDIA DGX A100 is a powerful AI supercomputer designed for advanced artificial intelligence, machine learning, and high-performance computing (HPC) workloads. Powered by eight NVIDIA A100 Tensor Core GPUs, it delivers over 5 petaflops of AI performance. The system supports NVIDIA’s Multi-Instance GPU (MIG) technology, allowing efficient resource partitioning for simultaneous tasks. Ideal for deep learning, data analytics, and scientific research, the DGX A100 is built for scalability, enabling organizations to deploy large AI clusters. Its robust performance makes it a top choice for industries like healthcare, finance, and autonomous systems.
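The headline "over 5 petaflops of AI performance" can be sanity-checked with simple arithmetic. A minimal sketch, assuming the figure refers to the commonly cited per-GPU FP16/BF16 Tensor Core rate with structured sparsity (roughly 624 TFLOPS for the A100); that per-GPU constant is an assumption, not stated on this page:

```python
# Rough sanity check of the "over 5 petaflops" headline figure.
# Assumption: the figure is FP16/BF16 Tensor Core throughput with
# structured sparsity (~624 TFLOPS per A100) aggregated over 8 GPUs.

PER_GPU_TFLOPS_FP16_SPARSE = 624  # assumed per-A100 rate, not from this page
NUM_GPUS = 8

total_pflops = PER_GPU_TFLOPS_FP16_SPARSE * NUM_GPUS / 1000
print(f"Aggregate AI throughput: ~{total_pflops:.2f} PFLOPS")
```

Eight GPUs at that rate land just under 5 PFLOPS, consistent with the "over 5 petaflops" marketing rounding.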


Comparison Table for NVIDIA DGX A100, DGX H100, and DGX Station A100

| Feature | NVIDIA DGX A100 | NVIDIA DGX H100 | NVIDIA DGX Station A100 |
|---|---|---|---|
| Release Year | 2020 | 2022 | 2020 |
| Purpose | AI/ML, HPC, and data analytics | Next-gen AI/ML, generative AI, HPC | AI/ML development and workstation use |
| GPUs | 8 x NVIDIA A100 (Ampere architecture) | 8 x NVIDIA H100 (Hopper architecture) | 4 x NVIDIA A100 (Ampere architecture) |
| GPU Memory | 640GB total (80GB per GPU, NVLink) | 640GB total (80GB per GPU, NVLink) | 320GB total (80GB per GPU, NVLink) |
| Tensor Cores | 3456 (432 per GPU) | 4224 (528 per GPU) | 1728 (432 per GPU) |
| FP64 Performance | 19.5 TFLOPS | 60 TFLOPS | 9.7 TFLOPS |
| FP32 Performance | 156 TFLOPS | 60 TFLOPS (FP8 hybrid mode supported) | 78 TFLOPS |
| FP8 Performance | N/A | 1.25 PFLOPS | N/A |
| Mixed Precision | 5 PFLOPS | 6 PFLOPS | 2.5 PFLOPS |
| NVLink Bandwidth | 600GB/s | 900GB/s | 600GB/s |
| CPU | 2 x AMD EPYC 7742 | 2 x Intel Xeon Platinum 8480C | 1 x AMD EPYC 7742 |
| System Memory | 1TB DDR4 | 2TB DDR5 | 512GB DDR4 |
| Storage | 15TB NVMe SSD | 30TB NVMe SSD | 7.68TB NVMe SSD |
| Networking | 8 x 200Gbps HDR InfiniBand | 8 x 400Gbps NDR InfiniBand | 2 x 10Gbps Ethernet |
| Power Consumption | 6.5 kW | 10.2 kW | 1.5 kW |
| Form Factor | 6U Rackmount | 6U Rackmount | Desktop Tower |
| Cooling | Air- or liquid-cooled | Air- or liquid-cooled | Air-cooled |
| Primary Use Case | Data centers, large-scale AI | Advanced AI, generative AI, HPC | Personal AI/ML workstation |
| Recommendation | Somewhat recommended | Fully recommended | Highly recommended |
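The per-GPU memory figures can be checked against the system totals; a small sketch using only the numbers from the table above:

```python
# Consistency check: total GPU memory divided by GPU count
# should give the 80GB-per-GPU figure quoted in the table.
systems = {
    "DGX A100":         {"total_gpu_mem_gb": 640, "gpus": 8},
    "DGX H100":         {"total_gpu_mem_gb": 640, "gpus": 8},
    "DGX Station A100": {"total_gpu_mem_gb": 320, "gpus": 4},
}

for name, s in systems.items():
    per_gpu = s["total_gpu_mem_gb"] / s["gpus"]
    print(f"{name}: {per_gpu:.0f} GB per GPU")  # 80 GB in every case
```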

Key Differences

  1. GPU Architecture:
    • DGX H100 introduces the Hopper architecture with advanced features like FP8 precision, optimized for generative AI and next-gen ML tasks.
    • DGX A100 and DGX Station A100 use the older Ampere architecture.
  2. Performance:
    • DGX H100 provides significant improvements, with 1.25 PFLOPS FP8 and enhanced FP64 performance.
    • DGX A100 offers 5 PFLOPS mixed precision, while DGX Station A100 focuses on smaller-scale tasks with 2.5 PFLOPS mixed precision.
  3. Memory and Bandwidth:
    • DGX H100 leads with 900GB/s NVLink bandwidth and 2TB DDR5 memory, doubling the system memory of the DGX A100.
  4. Networking:
    • DGX H100 features 400Gbps InfiniBand networking, doubling the throughput of DGX A100’s 200Gbps.
  5. Power and Cooling:
    • DGX H100 has higher power consumption (10.2 kW) and is designed for advanced AI tasks in large data centers.
    • DGX Station A100 consumes only 1.5 kW, making it suitable for small teams or workstation environments.
  6. Use Case:
    • DGX H100 is ideal for next-gen generative AI and large-scale HPC applications.
    • DGX A100 suits general AI workloads and enterprise data centers.
    • DGX Station A100 is tailored for personal or small-team AI development.

Recommendations

  • DGX H100: Best for cutting-edge AI research, generative AI, and high-performance computing requiring maximum power and efficiency.
  • DGX A100: Suitable for established data centers handling a mix of AI/ML tasks.
  • DGX Station A100: Perfect for personal AI/ML projects or on-premises development with reduced resource requirements.
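The recommendations above can be sketched as a small selection helper. `recommend_dgx` and its workload categories are hypothetical names invented for illustration, not an NVIDIA API; the logic simply mirrors the three bullets above:

```python
# Hypothetical helper mapping workload needs to the page's recommendations.
# Function name and category strings are illustrative assumptions.

def recommend_dgx(workload: str, rack_space: bool = True) -> str:
    """Pick a DGX system following the guidance on this page."""
    if not rack_space:
        return "DGX Station A100"   # desktop tower, ~1.5 kW draw
    if workload in ("generative-ai", "large-scale-hpc"):
        return "DGX H100"           # Hopper, FP8, 400Gbps fabric
    return "DGX A100"               # general AI/ML in the data center

print(recommend_dgx("generative-ai"))               # DGX H100
print(recommend_dgx("training", rack_space=False))  # DGX Station A100
```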
Use Cases

Typical deployments span healthcare, finance, autonomous vehicles, and scientific research.

Key Features

Equipped with eight NVIDIA A100 GPUs, each offering 40 GB or 80 GB of high-bandwidth HBM2 memory.

NVIDIA DGX Series

NVIDIA DGX A100, NVIDIA DGX H100 80GB, NVIDIA DGX Station A100
