
NVIDIA H100 PCIe vs NVIDIA L4

We compared two GPUs: the NVIDIA H100 PCIe, an 80GB AI/data-center GPU, and the NVIDIA L4, a 24GB professional-market GPU, to see which performs better in key specifications, benchmark tests, power consumption, and more.

Main Differences

NVIDIA H100 PCIe's Advantages
More VRAM (80 GB vs 24 GB)
Higher memory bandwidth (2039 GB/s vs 300.1 GB/s)
7168 more shading units (14592 vs 7424)
NVIDIA L4's Advantages
Released 1 year later (Mar 2023 vs Mar 2022)
16% higher boost clock (2040 MHz vs 1755 MHz; see the sketch below)
Lower TDP (72 W vs 350 W)
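
As a quick sanity check, the sketch below reproduces these deltas from the spec values listed further down the page. It is purely illustrative Python; the variable names are ours, not from any vendor tool:

    # Reproduce the headline deltas from the spec values listed below (illustrative only)
    h100 = {"vram_gb": 80, "bw_gb_s": 2039, "shaders": 14592, "boost_mhz": 1755, "tdp_w": 350}
    l4   = {"vram_gb": 24, "bw_gb_s": 300.1, "shaders": 7424, "boost_mhz": 2040, "tdp_w": 72}

    print(f"VRAM:      {h100['vram_gb'] / l4['vram_gb']:.1f}x more on the H100 PCIe")    # 3.3x
    print(f"Bandwidth: {h100['bw_gb_s'] / l4['bw_gb_s']:.1f}x more on the H100 PCIe")    # 6.8x
    print(f"Shaders:   {h100['shaders'] - l4['shaders']} more on the H100 PCIe")         # 7168
    print(f"Boost:     {l4['boost_mhz'] / h100['boost_mhz'] - 1:.0%} higher on the L4")  # 16%
    print(f"TDP:       {h100['tdp_w'] - l4['tdp_w']} W lower on the L4")                 # 278 W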

Benchmark

FP32 (float): H100 PCIe 51.22 TFLOPS (+69%) vs L4 30.29 TFLOPS
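
The +69% headline is simply the ratio of the two theoretical FP32 figures:

    # 51.22 / 30.29 ≈ 1.69, i.e. roughly a 69% theoretical FP32 advantage for the H100 PCIe
    print(f"{51.22 / 30.29 - 1:.0%}")   # -> 69%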

Graphics Card (H100 PCIe vs L4)

Release Date: Mar 2022 vs Mar 2023
Generation: Tesla Hopper vs Tesla Ada
Type: AI GPU vs Professional
Bus Interface: PCIe 5.0 x16 vs PCIe 4.0 x16

Clock Speeds (H100 PCIe vs L4)

Base Clock: 1095 MHz vs 795 MHz
Boost Clock: 1755 MHz vs 2040 MHz
Memory Clock: 1593 MHz vs 1563 MHz

Memory (H100 PCIe vs L4)

Memory Size: 80 GB vs 24 GB
Memory Type: HBM2e vs GDDR6
Memory Bus: 5120-bit vs 192-bit
Bandwidth: 2039 GB/s vs 300.1 GB/s
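
The bandwidth figures follow from bus width times effective data rate. A minimal Python sketch, assuming the listed memory clocks are command clocks with the usual data-rate multipliers (×2 for HBM2e, ×8 for GDDR6); the function name is illustrative:

    def bandwidth_gb_s(bus_bits, mem_clock_mhz, rate_multiplier):
        # bytes per transfer x effective giga-transfers per second
        return (bus_bits / 8) * (mem_clock_mhz * rate_multiplier / 1000)

    print(bandwidth_gb_s(5120, 1593, 2))   # H100 PCIe, HBM2e -> ~2039 GB/s
    print(bandwidth_gb_s(192, 1563, 8))    # L4, GDDR6        -> ~300.1 GB/s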

Render Config (H100 PCIe vs L4)

SM Count: 114 vs 60
Shading Units: 14592 vs 7424
TMUs: 456 vs 240
ROPs: 24 vs 80
Tensor Cores: 456 vs 240
RT Cores: - vs 60
L1 Cache: 256 KB (per SM) vs 128 KB (per SM)
L2 Cache: 50 MB vs 48 MB

Theoretical Performance (H100 PCIe vs L4)

Pixel Rate: 42.12 GPixel/s vs 163.2 GPixel/s
Texture Rate: 800.3 GTexel/s vs 489.6 GTexel/s
FP16 (half): 204.9 TFLOPS vs 30.29 TFLOPS
FP32 (float): 51.22 TFLOPS vs 30.29 TFLOPS
FP64 (double): 25.61 TFLOPS vs 473.3 GFLOPS
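
These theoretical figures are products of unit counts and boost clock: pixel rate = ROPs × boost clock, texture rate = TMUs × boost clock, and FP32 throughput = shading units × 2 FLOPs per clock (one FMA) × boost clock; the FP16 and FP64 entries then follow from the ratios the table implies (4:1 and 1:2 of FP32 on the H100 PCIe, 1:1 and 1:64 on the L4). A rough Python sketch, with illustrative names, that reproduces the listed numbers from the Render Config and Clock Speeds values:

    def theoretical_rates(shaders, tmus, rops, boost_mhz):
        ghz = boost_mhz / 1000
        return {
            "pixel_gpix_s": rops * ghz,               # GPixel/s
            "texture_gtex_s": tmus * ghz,             # GTexel/s
            "fp32_tflops": shaders * 2 * ghz / 1000,  # 2 FLOPs (FMA) per shader per clock
        }

    print(theoretical_rates(14592, 456, 24, 1755))   # H100 PCIe -> ~42.1, ~800.3, ~51.2
    print(theoretical_rates(7424, 240, 80, 2040))    # L4        -> ~163.2, ~489.6, ~30.3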

Graphics Processor (H100 PCIe vs L4)

GPU Name: GH100 vs AD104
GPU Variant: - vs AD104-???-A1
Architecture: Hopper vs Ada Lovelace
Foundry: TSMC vs TSMC
Process Size: 4 nm vs 5 nm
Transistors: 80 billion vs 35.8 billion
Die Size: 814 mm² vs 294 mm²

Board Design (H100 PCIe vs L4)

TDP: 350 W vs 72 W
Suggested PSU: 750 W vs 250 W
Outputs: No outputs vs No outputs
Power Connectors: 1x 16-pin vs 1x 16-pin

Graphics Features (H100 PCIe vs L4)

DirectX: N/A vs 12 Ultimate (12_2)
OpenGL: N/A vs 4.6
OpenCL: 3.0 vs 3.0
Vulkan: N/A vs 1.3
CUDA (compute capability): 9.0 vs 8.9
Shader Model: N/A vs 6.7
