
AMD Instinct MI300X vs NVIDIA H100 PCIe 80 GB

We compared two professional AI GPUs, the 192 GB AMD Instinct MI300X and the 80 GB NVIDIA H100 PCIe 80 GB, to see which delivers better performance in key specifications, benchmark tests, power consumption, and more.

Main Differences

AMD Instinct MI300X's Advantages
Released 9 months later (Dec 2023 vs Mar 2023)
Around 20% higher boost clock (2100 MHz vs 1755 MHz)
More VRAM (192 GB vs 80 GB)
Higher VRAM bandwidth (5300 GB/s vs 2039 GB/s)
4864 more shading units (19456 vs 14592)

NVIDIA H100 PCIe 80 GB's Advantages
Lower TDP (350 W vs 750 W)

Benchmark

FP32 (float)
AMD Instinct MI300X: 163.4 TFLOPS (+219%)
NVIDIA H100 PCIe 80 GB: 51.22 TFLOPS
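
For reference, the +219% figure is simply the ratio of the two peak FP32 numbers listed above; a quick check in Python:

```python
# Both peak-FP32 figures are taken from the table above.
mi300x_fp32 = 163.4  # TFLOPS, AMD Instinct MI300X
h100_fp32 = 51.22    # TFLOPS, NVIDIA H100 PCIe 80 GB

advantage = (mi300x_fp32 / h100_fp32 - 1) * 100
print(f"MI300X FP32 advantage: +{advantage:.0f}%")  # -> +219%
```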

Graphics Card

In each row below, the first value refers to the AMD Instinct MI300X and the second to the NVIDIA H100 PCIe 80 GB.

Release Date: Dec 2023 vs Mar 2023
Generation: Instinct vs Tesla Hopper
Type: Professional vs AI GPU
Bus Interface: PCIe 5.0 x16 vs PCIe 5.0 x16

Clock Speeds

Base Clock: 1000 MHz vs 1095 MHz
Boost Clock: 2100 MHz vs 1755 MHz
Memory Clock: 5200 MHz vs 1593 MHz

Memory

Memory Size: 192 GB vs 80 GB
Memory Type: HBM3 vs HBM2e
Memory Bus: 8192-bit vs 5120-bit
Bandwidth: 5300 GB/s vs 2039 GB/s
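
The listed bandwidth figures are consistent with the usual peak-bandwidth formula, bandwidth = effective per-pin data rate × bus width ÷ 8. A minimal sketch, assuming the MI300X's 5200 MHz memory clock is already the effective per-pin rate (5.2 Gbps) while the H100's 1593 MHz HBM2e clock transfers data twice per cycle:

```python
def peak_bandwidth_gbs(effective_mtps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: transfers/s per pin * pins / 8 bits per byte."""
    return effective_mtps * bus_width_bits / 8 / 1000

# AMD Instinct MI300X: 5.2 Gbps effective over an 8192-bit HBM3 bus
print(peak_bandwidth_gbs(5200, 8192))      # ~5324.8 GB/s, listed as 5300 GB/s
# NVIDIA H100 PCIe: 1593 MHz HBM2e (double data rate) over a 5120-bit bus
print(peak_bandwidth_gbs(1593 * 2, 5120))  # ~2039.0 GB/s, as listed
```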

Render Config

SM Count: - vs 114
Compute Units: 304 vs -
Shading Units: 19456 vs 14592
TMUs: 880 vs 456
ROPs: 0 vs 24
Tensor Cores: - vs 456
RT Cores: - vs -
L1 Cache: 16 KB (per CU) vs 256 KB (per SM)
L2 Cache: 16 MB vs 50 MB

Theoretical Performance

Pixel Rate: 0 MPixel/s vs 42.12 GPixel/s
Texture Rate: 1496 GTexel/s vs 800.3 GTexel/s
FP16 (half): 1300 TFLOPS vs 204.9 TFLOPS
FP32 (float): 163.4 TFLOPS vs 51.22 TFLOPS
FP64 (double): 81.7 TFLOPS vs 25.61 TFLOPS
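
These theoretical figures follow from shading units × FLOPs per unit per clock × boost clock. A minimal sketch, assuming 4 FP32 / 2 FP64 operations per shading unit per clock on the MI300X (packed FP32 on CDNA 3) and 2 FP32 / 1 FP64 on the H100 PCIe (an FMA counted as two operations):

```python
def peak_tflops(shading_units: int, flops_per_unit_per_clock: float, boost_ghz: float) -> float:
    """Theoretical peak throughput in TFLOPS."""
    return shading_units * flops_per_unit_per_clock * boost_ghz / 1000

# AMD Instinct MI300X: 19456 shading units at 2.1 GHz boost
print(peak_tflops(19456, 4, 2.1))    # ~163.4 TFLOPS FP32
print(peak_tflops(19456, 2, 2.1))    # ~81.7 TFLOPS FP64
# NVIDIA H100 PCIe 80 GB: 14592 shading units at 1.755 GHz boost
print(peak_tflops(14592, 2, 1.755))  # ~51.2 TFLOPS FP32
print(peak_tflops(14592, 1, 1.755))  # ~25.6 TFLOPS FP64
```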

Graphics Processor

GPU Name: MI300 vs GH100
GPU Variant: - vs -
Architecture: CDNA 3.0 vs Hopper
Foundry: TSMC vs TSMC
Process Size: 5 nm vs 4 nm
Transistors: 146 billion vs 80 billion
Die Size: 1017 mm² vs 814 mm²
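
A rough transistor-density comparison derived from the transistor counts and die sizes above (the MI300X is a multi-die design, so a density figure based on its aggregate die area is only approximate):

```python
# Transistor count and die size taken from the table above.
chips = {
    "AMD Instinct MI300X (MI300)": (146e9, 1017),
    "NVIDIA H100 PCIe (GH100)": (80e9, 814),
}

for name, (transistors, die_mm2) in chips.items():
    print(f"{name}: {transistors / die_mm2 / 1e6:.1f} MTransistors/mm²")
# -> ~143.6 MTr/mm² for MI300X vs ~98.3 MTr/mm² for GH100
```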

Board Design

TDP: 750 W vs 350 W
Suggested PSU: 1000 W vs 750 W
Outputs: No outputs vs No outputs
Power Connectors: None vs 1x 16-pin
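
The TDP gap matters mostly in relation to throughput. A rough FP32-per-watt comparison using only the figures above (board TDP rather than measured power draw, so treat it as a coarse estimate):

```python
# Peak FP32 (TFLOPS) and TDP (W) from the tables above.
cards = {
    "AMD Instinct MI300X": (163.4, 750),
    "NVIDIA H100 PCIe 80 GB": (51.22, 350),
}

for name, (tflops, tdp_w) in cards.items():
    print(f"{name}: {tflops / tdp_w * 1000:.0f} GFLOPS/W")
# -> MI300X ~218 GFLOPS/W, H100 PCIe ~146 GFLOPS/W
```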

Graphics Features

DirectX: N/A vs N/A
OpenGL: N/A vs N/A
OpenCL: 3.0 vs 3.0
Vulkan: N/A vs N/A
CUDA: - vs 9.0
Shader Model: N/A vs N/A
