
NVIDIA A10 PCIe vs NVIDIA H100 PCIe

We compared a professional-market GPU, the 24GB NVIDIA A10 PCIe, against an AI GPU, the 80GB NVIDIA H100 PCIe, to see which delivers better performance in key specifications, benchmark tests, power consumption, and more.

Main Differences

NVIDIA A10 PCIe's advantages
Lower TDP (150 W vs 350 W)

NVIDIA H100 PCIe's advantages
Released 11 months later
4% higher boost clock (1755 MHz vs 1695 MHz)
More VRAM (80 GB vs 24 GB)
Higher memory bandwidth (2039 GB/s vs 600.2 GB/s)
5376 more shading units (14592 vs 9216)
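The advantage figures above follow directly from the spec values listed further down this page. A minimal Python check, using only numbers that appear in the tables below:

```python
# Reproduce the "Main Differences" figures from the spec values on this page.
a10 = {"boost_mhz": 1695, "bandwidth_gbs": 600.2, "shading_units": 9216}
h100 = {"boost_mhz": 1755, "bandwidth_gbs": 2039, "shading_units": 14592}

boost_gain = (h100["boost_mhz"] / a10["boost_mhz"] - 1) * 100   # ~3.5%, rounded to 4%
bw_ratio = h100["bandwidth_gbs"] / a10["bandwidth_gbs"]         # ~3.4x
extra_cores = h100["shading_units"] - a10["shading_units"]      # 5376

print(f"Boost clock gain: +{boost_gain:.1f}%")
print(f"Bandwidth ratio:  {bw_ratio:.1f}x")
print(f"Extra shading units: {extra_cores}")
```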

Benchmark

                   A10 PCIe          H100 PCIe
FP32 (float)       31.24 TFLOPS      51.22 TFLOPS (+64%)
Blender            2505              4845 (+93%)
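The percentage gains in the benchmark table are simple ratios of the two scores; a quick check in Python:

```python
# Relative gain of the H100 PCIe over the A10 PCIe for each benchmark on this page.
fp32_gain = (51.22 / 31.24 - 1) * 100     # ~64%
blender_gain = (4845 / 2505 - 1) * 100    # ~93%
print(f"FP32: +{fp32_gain:.0f}%, Blender: +{blender_gain:.0f}%")
```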

Graphics Card

                   A10 PCIe          H100 PCIe
Release Date       Apr 2021          Mar 2022
Generation         Tesla             Tesla Hopper
Type               Professional      AI GPU
Bus Interface      PCIe 4.0 x16      PCIe 5.0 x16

Clock Speeds

                   A10 PCIe          H100 PCIe
Base Clock         885 MHz           1095 MHz
Boost Clock        1695 MHz          1755 MHz
Memory Clock       1563 MHz          1593 MHz

Memory

                   A10 PCIe          H100 PCIe
Memory Size        24 GB             80 GB
Memory Type        GDDR6             HBM2e
Memory Bus         384 bit           5120 bit
Bandwidth          600.2 GB/s        2039 GB/s
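The bandwidth rows can be derived from each card's memory clock and bus width. A minimal sketch, assuming the usual effective data rates (8 transfers per clock per pin for the A10's GDDR6, 2 per clock for the H100's HBM2e); these multipliers are assumptions, not stated on this page:

```python
def bandwidth_gbs(mem_clock_mhz: float, rate_multiplier: int, bus_width_bits: int) -> float:
    """Bandwidth in GB/s: effective per-pin rate (Mbps) x bus width / 8 bits per byte."""
    effective_mbps = mem_clock_mhz * rate_multiplier
    return effective_mbps * bus_width_bits / 8 / 1000

# A10 PCIe: GDDR6 at 1563 MHz on a 384-bit bus (x8 per-pin rate assumed for GDDR6)
print(bandwidth_gbs(1563, 8, 384))    # ~600.2 GB/s
# H100 PCIe: HBM2e at 1593 MHz on a 5120-bit bus (x2 per-pin rate assumed for HBM2e)
print(bandwidth_gbs(1593, 2, 5120))   # ~2039 GB/s
```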

Render Config

                   A10 PCIe          H100 PCIe
SM Count           72                114
Compute Units      -                 -
Shading Units      9216              14592
TMUs               288               456
ROPs               96                24
Tensor Cores       288               456
RT Cores           72                -
L1 Cache           128 KB (per SM)   256 KB (per SM)
L2 Cache           6 MB              50 MB

Theoretical Performance

                   A10 PCIe          H100 PCIe
Pixel Rate         162.7 GPixel/s    42.12 GPixel/s
Texture Rate       488.2 GTexel/s    800.3 GTexel/s
FP16 (half)        31.24 TFLOPS      204.9 TFLOPS
FP32 (float)       31.24 TFLOPS      51.22 TFLOPS
FP64 (double)      976.3 GFLOPS      25.61 TFLOPS
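These theoretical figures follow the usual peak-rate formulas: pixel rate = ROPs x boost clock, texture rate = TMUs x boost clock, and FP32 = 2 x shading units x boost clock (one fused multiply-add per shader per clock), with FP16 and FP64 scaled by each chip's ratio to FP32 (1:1 and 1:32 on the A10, 4:1 and 1:2 on the H100, as implied by the table above). A minimal sketch using this page's values; the ratio parameters are inferred, not quoted from the page:

```python
def theoretical_rates(boost_mhz, shading_units, tmus, rops, fp16_ratio, fp64_ratio):
    """Peak rates from unit counts and boost clock (2 FLOPs per shader per clock for FMA)."""
    ghz = boost_mhz / 1000
    fp32_tflops = 2 * shading_units * ghz / 1000
    return {
        "pixel_gpixel_s": rops * ghz,
        "texture_gtexel_s": tmus * ghz,
        "fp32_tflops": fp32_tflops,
        "fp16_tflops": fp32_tflops * fp16_ratio,
        "fp64_tflops": fp32_tflops * fp64_ratio,
    }

# A10 PCIe: FP16 at 1:1 and FP64 at 1:32 of FP32 (assumed ratios)
print(theoretical_rates(1695, 9216, 288, 96, fp16_ratio=1, fp64_ratio=1 / 32))
# H100 PCIe: FP16 at 4:1 and FP64 at 1:2 of FP32 (assumed ratios)
print(theoretical_rates(1755, 14592, 456, 24, fp16_ratio=4, fp64_ratio=1 / 2))
```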

Graphics Processor

                   A10 PCIe          H100 PCIe
GPU Name           GA102             GH100
GPU Variant        GA102-890-A1      -
Architecture       Ampere            Hopper
Foundry            Samsung           TSMC
Process Size       8 nm              4 nm
Transistors        28.3 billion      80 billion
Die Size           628 mm²           814 mm²

Board Design

                   A10 PCIe          H100 PCIe
TDP                150 W             350 W
Suggested PSU      450 W             750 W
Outputs            No outputs        No outputs
Power Connectors   1x 8-pin          1x 16-pin

Graphics Features

                   A10 PCIe             H100 PCIe
DirectX            12 Ultimate (12_2)   N/A
OpenGL             4.6                  N/A
OpenCL             3.0                  3.0
Vulkan             1.3                  N/A
CUDA               8.6                  9.0
Shader Model       6.6                  N/A
