
NVIDIA A100 SXM4 40 GB vs NVIDIA Tesla V100 SXM3 32 GB

We compared an AI GPU, the NVIDIA A100 SXM4 40 GB (40 GB VRAM), with a professional-market GPU, the NVIDIA Tesla V100 SXM3 32 GB (32 GB VRAM), to see which delivers better performance in key specifications, benchmark tests, power consumption, and more.

Main Differences

NVIDIA A100 SXM4 40 GB's Advantages
Released 2 years and 2 months later (May 2020 vs Mar 2018)
More VRAM (40 GB vs 32 GB)
Higher memory bandwidth (1555 GB/s vs 897.0 GB/s)
1792 additional shading units (6912 vs 5120)
NVIDIA Tesla V100 SXM3 32 GB's Advantages
9% higher boost clock (1530 MHz vs 1410 MHz)
Lower TDP (250 W vs 400 W)

Benchmark

FP32 (float)
A100 SXM4 40 GB         19.49 TFLOPS  (+24%)
Tesla V100 SXM3 32 GB   15.67 TFLOPS
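The +24% figure is simply the ratio of the two FP32 throughput numbers; a quick sanity check of the arithmetic:

```python
# Sanity check of the +24% FP32 advantage quoted above.
a100_fp32 = 19.49   # TFLOPS, A100 SXM4 40 GB
v100_fp32 = 15.67   # TFLOPS, Tesla V100 SXM3 32 GB

advantage_pct = (a100_fp32 / v100_fp32 - 1) * 100
print(f"A100 FP32 advantage: +{advantage_pct:.0f}%")   # -> +24%
```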

Graphics Card

                  A100 SXM4 40 GB       Tesla V100 SXM3 32 GB
Release Date      May 2020              Mar 2018
Generation        Tesla Ampere          Tesla
Type              AI GPU                Professional
Bus Interface     PCIe 4.0 x16          PCIe 3.0 x16

Clock Speeds

                  A100 SXM4 40 GB       Tesla V100 SXM3 32 GB
Base Clock        1095 MHz              1290 MHz
Boost Clock       1410 MHz              1530 MHz
Memory Clock      1215 MHz              876 MHz

Memory

                  A100 SXM4 40 GB       Tesla V100 SXM3 32 GB
Memory Size       40 GB                 32 GB
Memory Type       HBM2e                 HBM2
Memory Bus        5120-bit              4096-bit
Bandwidth         1555 GB/s             897.0 GB/s
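The bandwidth figures follow from the memory clock and bus width: HBM2/HBM2e is double data rate, so peak bandwidth is 2 × memory clock × bus width / 8. A minimal sketch of that arithmetic, assuming the listed clocks are the base HBM clocks:

```python
# Peak memory bandwidth from memory clock and bus width.
# HBM2 / HBM2e transfers on both clock edges, so the effective clock is
# twice the listed memory clock.
def hbm_bandwidth_gb_s(memory_clock_mhz: float, bus_width_bits: int) -> float:
    effective_clock_hz = memory_clock_mhz * 1e6 * 2       # double data rate
    bytes_per_second = effective_clock_hz * bus_width_bits / 8
    return bytes_per_second / 1e9                         # GB/s

print(hbm_bandwidth_gb_s(1215, 5120))   # ~1555.2 GB/s  (A100 SXM4 40 GB)
print(hbm_bandwidth_gb_s(876, 4096))    # ~897.0 GB/s   (Tesla V100 SXM3 32 GB)
```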

Render Config

                  A100 SXM4 40 GB       Tesla V100 SXM3 32 GB
SM Count          108                   80
Compute Units     -                     -
Shading Units     6912                  5120
TMUs              432                   320
ROPs              160                   128
Tensor Cores      432                   640
RT Cores          -                     -
L1 Cache          192 KB (per SM)       128 KB (per SM)
L2 Cache          40 MB                 6 MB

Theoretical Performance

                  A100 SXM4 40 GB       Tesla V100 SXM3 32 GB
Pixel Rate        225.6 GPixel/s        195.8 GPixel/s
Texture Rate      609.1 GTexel/s        489.6 GTexel/s
FP16 (half)       77.97 TFLOPS          31.33 TFLOPS
FP32 (float)      19.49 TFLOPS          15.67 TFLOPS
FP64 (double)     9.746 TFLOPS          7.834 TFLOPS
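These theoretical numbers are derived from the render configuration and the boost clock: pixel rate = ROPs × clock, texture rate = TMUs × clock, and FP32 throughput = 2 × shading units × clock (two FLOPs per fused multiply-add). The FP16 and FP64 figures are fixed multiples of FP32 on each architecture (4× and 1/2× here for GA100, 2× and 1/2× for GV100). A short sketch of the arithmetic, assuming the boost clocks listed above:

```python
# Theoretical rates from the render configuration and boost clock.
def theoretical_rates(shading_units: int, tmus: int, rops: int, boost_mhz: float) -> dict:
    ghz = boost_mhz / 1000
    return {
        "pixel_rate_gpixel_s": rops * ghz,               # ROPs x clock
        "texture_rate_gtexel_s": tmus * ghz,             # TMUs x clock
        "fp32_tflops": 2 * shading_units * ghz / 1000,   # 2 FLOPs per FMA per unit
    }

print(theoretical_rates(6912, 432, 160, 1410))   # A100: ~225.6, ~609.1, ~19.49
print(theoretical_rates(5120, 320, 128, 1530))   # V100: ~195.8, ~489.6, ~15.67
```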

Graphics Processor

                  A100 SXM4 40 GB       Tesla V100 SXM3 32 GB
GPU Name          GA100                 GV100
GPU Variant       -                     -
Architecture      Ampere                Volta
Foundry           TSMC                  TSMC
Process Size      7 nm                  12 nm
Transistors       54.2 billion          21.1 billion
Die Size          826 mm²               815 mm²

Board Design

                  A100 SXM4 40 GB       Tesla V100 SXM3 32 GB
TDP               400 W                 250 W
Suggested PSU     800 W                 600 W
Outputs           No outputs            No outputs
Power Connectors  None                  None

Graphics Features

                  A100 SXM4 40 GB       Tesla V100 SXM3 32 GB
DirectX           N/A                   12 (12_1)
OpenGL            N/A                   4.6
OpenCL            3.0                   3.0
Vulkan            N/A                   1.3
CUDA              8.0                   7.0
Shader Model      N/A                   6.6
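Note that the CUDA row refers to the compute capability of each chip (8.0 for GA100, 7.0 for GV100), not a CUDA toolkit version. If PyTorch with CUDA support is installed, a minimal check of the capability exposed by the device actually in the machine looks like this:

```python
# The "CUDA" row above is the compute capability (8.0 on GA100, 7.0 on GV100).
# Minimal runtime check; requires a CUDA build of PyTorch.
import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f"{torch.cuda.get_device_name(0)}: compute capability {major}.{minor}")
    # e.g. "NVIDIA A100-SXM4-40GB: compute capability 8.0"
else:
    print("No CUDA device visible")
```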
