
AMD Instinct MI300A vs NVIDIA H100 SXM5 80 GB

We compared two data-center AI accelerators — the AMD Instinct MI300A (Professional, 128 GB VRAM) and the NVIDIA H100 SXM5 80 GB (AI GPU, 80 GB VRAM) — to see which has the edge in key specifications, benchmark results, power consumption, and more.

Main Differences

AMD Instinct MI300A's Advantages
Released 9 months later (Dec 2023 vs Mar 2023)
6% higher boost clock (2100 MHz vs 1980 MHz)
More VRAM (128 GB vs 80 GB)
Higher VRAM bandwidth (5300 GB/s vs 1681 GB/s)

NVIDIA H100 SXM5 80 GB's Advantages
2304 additional shading units (16896 vs 14592)
Lower TDP (700 W vs 760 W)

Benchmark

FP32 (float)
AMD Instinct MI300A:     122.6 TFLOPS (+83%)
NVIDIA H100 SXM5 80 GB:  66.91 TFLOPS
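The +83% figure is simply the ratio of the two FP32 throughput numbers above; a quick sketch of the arithmetic:

```python
# FP32 throughput from the benchmark section, in TFLOPS.
mi300a_fp32 = 122.6
h100_fp32 = 66.91

# Relative advantage of the MI300A over the H100.
advantage = (mi300a_fp32 / h100_fp32 - 1) * 100
print(f"MI300A FP32 advantage: +{advantage:.0f}%")  # → +83%
```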

Graphics Card

Spec           AMD Instinct MI300A   NVIDIA H100 SXM5 80 GB
Release Date   Dec 2023              Mar 2023
Generation     Instinct              Tesla (Hopper)
Type           Professional          AI GPU
Bus Interface  PCIe 5.0 x16          PCIe 5.0 x16

Clock Speeds

Spec          AMD Instinct MI300A   NVIDIA H100 SXM5 80 GB
Base Clock    1000 MHz              1590 MHz
Boost Clock   2100 MHz              1980 MHz
Memory Clock  5200 MHz              1313 MHz

Memory

Spec         AMD Instinct MI300A   NVIDIA H100 SXM5 80 GB
Memory Size  128 GB                80 GB
Memory Type  HBM3                  HBM3
Memory Bus   8192-bit              5120-bit
Bandwidth    5300 GB/s             1681 GB/s
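The bandwidth figures follow from bus width times per-pin data rate (bandwidth in GB/s = bus bits × Gbit/s per pin ÷ 8). A minimal sketch — the per-pin rates here are inferred from the table's own memory-clock and bandwidth numbers, not quoted from a datasheet:

```python
def hbm_bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s: bus width (bits) x per-pin rate (Gbit/s) / 8."""
    return bus_bits * gbps_per_pin / 8

# MI300A: 8192-bit bus; ~5.2 Gbps per pin (from the 5200 MHz effective figure).
print(hbm_bandwidth_gbs(8192, 5.2))    # ~5325 GB/s, listed rounded to 5300 GB/s
# H100 SXM5: 5120-bit bus; ~2.626 Gbps per pin (2x the 1313 MHz memory clock).
print(hbm_bandwidth_gbs(5120, 2.626))  # ~1681 GB/s
```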

Render Config

Spec           AMD Instinct MI300A   NVIDIA H100 SXM5 80 GB
SM Count       -                     132
Compute Units  228                   -
Shading Units  14592                 16896
TMUs           880                   528
ROPs           0                     24
Tensor Cores   -                     528
RT Cores       -                     -
L1 Cache       16 KB (per CU)        256 KB (per SM)
L2 Cache       16 MB                 50 MB

Theoretical Performance

Spec            AMD Instinct MI300A   NVIDIA H100 SXM5 80 GB
Pixel Rate      0 MPixel/s            47.52 GPixel/s
Texture Rate    1496 GTexel/s         1045 GTexel/s
FP16 (half)     980.6 TFLOPS          267.6 TFLOPS
FP32 (float)    122.6 TFLOPS          66.91 TFLOPS
FP64 (double)   61.3 TFLOPS           33.45 TFLOPS
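These theoretical FLOPS figures follow the usual shading-units × boost-clock × ops-per-clock formula (2 for a fused multiply-add). Note that the MI300A's 122.6 TFLOPS FP32 figure implies 4 ops per clock per unit (dual-issue) — that factor is inferred from the numbers above, not quoted from a spec sheet:

```python
def tflops(shading_units: int, boost_mhz: int, ops_per_clock: int = 2) -> float:
    """Peak throughput in TFLOPS: units x clock (MHz) x ops per clock / 1e6."""
    return shading_units * boost_mhz * ops_per_clock / 1e6

# H100 SXM5: 16896 shading units at 1980 MHz boost, FMA = 2 ops/clock.
print(tflops(16896, 1980))     # ~66.91 TFLOPS FP32
# MI300A: 14592 units at 2100 MHz; 122.6 TFLOPS implies 4 FP32 ops/clock.
print(tflops(14592, 2100, 4))  # ~122.6 TFLOPS FP32
```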

Graphics Processor

Spec          AMD Instinct MI300A   NVIDIA H100 SXM5 80 GB
GPU Name      MI300                 GH100
GPU Variant   -                     -
Architecture  CDNA 3.0              Hopper
Foundry       TSMC                  TSMC
Process Size  5 nm                  4 nm
Transistors   146 billion           80 billion
Die Size      1017 mm²              814 mm²

Board Design

Spec              AMD Instinct MI300A   NVIDIA H100 SXM5 80 GB
TDP               760 W                 700 W
Suggested PSU     1000 W                1100 W
Outputs           No outputs            No outputs
Power Connectors  None                  8-pin EPS

Graphics Features

Spec          AMD Instinct MI300A   NVIDIA H100 SXM5 80 GB
DirectX       N/A                   N/A
OpenGL        N/A                   N/A
OpenCL        3.0                   3.0
Vulkan        N/A                   N/A
CUDA          -                     9.0
Shader Model  N/A                   N/A
