AMD Instinct MI300X vs NVIDIA H100 SXM5 80 GB

We compared two professional AI GPUs, the AMD Instinct MI300X (192 GB VRAM) and the NVIDIA H100 SXM5 80 GB (80 GB VRAM), to see which has the better performance across key specifications, benchmark tests, power consumption, and more.

Main Differences

AMD Instinct MI300X's Advantages
Released 9 months later (Dec 2023 vs Mar 2023)
Higher boost clock, about 6% faster (2100 MHz vs 1980 MHz)
More VRAM (192 GB vs 80 GB)
Higher VRAM bandwidth (5300 GB/s vs 1681 GB/s)
2,560 more shading units (19,456 vs 16,896)
NVIDIA H100 SXM5 80 GB's Advantages
Lower TDP (700 W vs 750 W)

Benchmark

FP32 (float): AMD Instinct MI300X 163.4 TFLOPS vs NVIDIA H100 SXM5 80 GB 66.91 TFLOPS (MI300X ahead by about 144%)
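The headline percentage is simply the ratio of the two FP32 figures from the table. A minimal sketch of the arithmetic, using the numbers listed above:

```python
# FP32 throughput from the benchmark row above (TFLOPS).
mi300x_fp32 = 163.4
h100_fp32 = 66.91

# Relative advantage of the MI300X over the H100, as a percentage.
advantage_pct = (mi300x_fp32 / h100_fp32 - 1) * 100
print(f"MI300X FP32 advantage: +{advantage_pct:.0f}%")  # prints +144%
```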

Graphics Card

Each row below lists the AMD Instinct MI300X value first, then the NVIDIA H100 SXM5 80 GB value.

Release Date: Dec 2023 vs Mar 2023
Generation: Instinct vs Tesla Hopper
Type: Professional vs AI GPU
Bus Interface: PCIe 5.0 x16 vs PCIe 5.0 x16

Clock Speeds

Base Clock: 1000 MHz vs 1590 MHz
Boost Clock: 2100 MHz vs 1980 MHz
Memory Clock: 5200 MHz vs 1313 MHz

Memory

Memory Size: 192 GB vs 80 GB
Memory Type: HBM3 vs HBM3
Memory Bus: 8192-bit vs 5120-bit
Bandwidth: 5300 GB/s vs 1681 GB/s
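The listed bandwidth figures follow from the bus width and the effective per-pin transfer rate. A minimal sketch, assuming the MI300X's 5200 MHz entry is already the effective rate (5.2 Gbps per pin) while the H100's 1313 MHz entry is a base clock that HBM3's double data rate doubles; both are assumptions about how the table's clocks are reported:

```python
def peak_bandwidth_gb_s(effective_rate_mt_s: float, bus_width_bits: int) -> float:
    # bytes per second = (transfers per second) * (bus width in bits) / 8
    return effective_rate_mt_s * 1e6 * bus_width_bits / 8 / 1e9

# MI300X: assumed 5200 MT/s effective on an 8192-bit bus.
print(peak_bandwidth_gb_s(5200, 8192))      # ~5324.8, listed as 5300 GB/s
# H100 SXM5: assumed 1313 MHz base clock, doubled by DDR, on a 5120-bit bus.
print(peak_bandwidth_gb_s(1313 * 2, 5120))  # ~1680.6, listed as 1681 GB/s
```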

Render Config

SM Count: - vs 132
Compute Units: 304 vs -
Shading Units: 19,456 vs 16,896
TMUs: 880 vs 528
ROPs: 0 vs 24
Tensor Cores: - vs 528
RT Cores: - vs -
L1 Cache: 16 KB (per CU) vs 256 KB (per SM)
L2 Cache: 16 MB vs 50 MB

Theoretical Performance

Pixel Rate: 0 MPixel/s vs 47.52 GPixel/s
Texture Rate: 1496 GTexel/s vs 1045 GTexel/s
FP16 (half): 1300 TFLOPS vs 267.6 TFLOPS
FP32 (float): 163.4 TFLOPS vs 66.91 TFLOPS
FP64 (double): 81.7 TFLOPS vs 33.45 TFLOPS
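The FP32 numbers can be reproduced from the shader counts and boost clocks listed earlier. A hedged sketch: the H100 figure matches the usual 2 FLOPs (one FMA) per CUDA core per clock, while the MI300X figure only falls out if each CDNA 3 stream processor is credited with 4 FLOPs per clock (packed/dual-issue FP32), which is an assumption about how the table's value was derived. The H100 pixel rate likewise follows from ROPs times boost clock:

```python
def tflops(shaders: int, boost_mhz: float, flops_per_clock: float) -> float:
    # Theoretical throughput = shaders * FLOPs per shader per clock * clock rate
    return shaders * flops_per_clock * boost_mhz * 1e6 / 1e12

# H100 SXM5: 16896 shading units at 1980 MHz, 2 FLOPs/clock (FMA).
print(tflops(16896, 1980, 2))   # ~66.9  -> listed 66.91 TFLOPS

# MI300X: 19456 shading units at 2100 MHz; 4 FLOPs/clock assumed (packed FP32).
print(tflops(19456, 2100, 4))   # ~163.4 -> listed 163.4 TFLOPS

# H100 pixel rate: 24 ROPs * 1980 MHz, matching the 47.52 GPixel/s in the table.
print(24 * 1980e6 / 1e9)        # 47.52
```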

Graphics Processor

GPU Name: MI300 vs GH100
GPU Variant: - vs -
Architecture: CDNA 3.0 vs Hopper
Foundry: TSMC vs TSMC
Process Size: 5 nm vs 4 nm
Transistors: 146 billion vs 80 billion
Die Size: 1017 mm² vs 814 mm²

Board Design

TDP: 750 W vs 700 W
Suggested PSU: 1000 W vs 1100 W
Outputs: No outputs vs No outputs
Power Connectors: None vs 8-pin EPS

Graphics Features

DirectX: N/A vs N/A
OpenGL: N/A vs N/A
OpenCL: 3.0 vs 3.0
Vulkan: N/A vs N/A
CUDA: - vs 9.0
Shader Model: N/A vs N/A
