
AMD Instinct MI300A vs NVIDIA Tesla V100 PCIe 32 GB

We compared two professional-market GPUs: the AMD Instinct MI300A (128 GB VRAM) and the NVIDIA Tesla V100 PCIe 32 GB (32 GB VRAM), looking at key specifications, benchmark results, power consumption, and more to see which performs better.

Main Differences

AMD Instinct MI300A's Advantages
Newer by 5 years and 9 months (Dec 2023 vs Mar 2018)
52% higher boost clock (2100 MHz vs 1380 MHz)
Four times the VRAM (128 GB vs 32 GB)
Nearly 6x the memory bandwidth (5300 GB/s vs 897.0 GB/s)
9472 more shading units (14592 vs 5120)
NVIDIA Tesla V100 PCIe 32 GB's Advantages
Much lower TDP (250 W vs 760 W)

Benchmark

FP32 (float)
AMD Instinct MI300A: 122.6 TFLOPS (+767%)
Tesla V100 PCIe 32 GB: 14.13 TFLOPS
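The +767% figure above is simply the ratio of the two listed FP32 numbers, expressed as a percentage gain. A minimal sketch in plain Python, using only the values from this page:

```python
# FP32 throughput figures listed on this page, in TFLOPS
mi300a_fp32 = 122.6
v100_fp32 = 14.13

ratio = mi300a_fp32 / v100_fp32        # ~8.68x the V100's throughput
advantage_pct = (ratio - 1) * 100      # ~767.7%, listed as +767%

print(f"MI300A FP32 advantage: {ratio:.2f}x ({advantage_pct:.1f}%)")
```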

Graphics Card

                  AMD Instinct MI300A   Tesla V100 PCIe 32 GB
Release Date      Dec 2023              Mar 2018
Generation        Instinct              Tesla
Type              Professional          Professional
Bus Interface     PCIe 5.0 x16          PCIe 3.0 x16

Clock Speeds

                  AMD Instinct MI300A   Tesla V100 PCIe 32 GB
Base Clock        1000 MHz              1230 MHz
Boost Clock       2100 MHz              1380 MHz
Memory Clock      5200 MHz              876 MHz

Memory

                  AMD Instinct MI300A   Tesla V100 PCIe 32 GB
Memory Size       128 GB                32 GB
Memory Type       HBM3                  HBM2
Memory Bus        8192-bit              4096-bit
Bandwidth         5300 GB/s             897.0 GB/s
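Both bandwidth figures can be reproduced from the bus width and the effective per-pin data rate (bandwidth = bus width x data rate / 8). A minimal sketch; the per-pin rates used here (5.2 Gbps for the MI300A's HBM3 and 2 x 876 MHz = 1.752 Gbps for the V100's HBM2) are assumptions inferred from the memory clocks listed above:

```python
def hbm_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width (bits) x per-pin data rate (Gbit/s) / 8."""
    return bus_width_bits * data_rate_gbps / 8

# MI300A: 8192-bit HBM3 at an assumed 5.2 Gbps effective data rate
print(hbm_bandwidth_gbs(8192, 5.2))    # ~5324.8 GB/s, listed as 5300 GB/s
# Tesla V100: 4096-bit HBM2 at 876 MHz, double data rate -> 1.752 Gbps per pin
print(hbm_bandwidth_gbs(4096, 1.752))  # ~897.0 GB/s
```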

Render Config

                  AMD Instinct MI300A   Tesla V100 PCIe 32 GB
SM Count          -                     80
Compute Units     228                   -
Shading Units     14592                 5120
TMUs              880                   320
ROPs              0                     128
Tensor Cores      -                     640
RT Cores          -                     -
L1 Cache          16 KB (per CU)        128 KB (per SM)
L2 Cache          16 MB                 6 MB

Theoretical Performance

                  AMD Instinct MI300A   Tesla V100 PCIe 32 GB
Pixel Rate        0 MPixel/s            176.6 GPixel/s
Texture Rate      1496 GTexel/s         441.6 GTexel/s
FP16 (half)       980.6 TFLOPS          28.26 TFLOPS
FP32 (float)      122.6 TFLOPS          14.13 TFLOPS
FP64 (double)     61.3 TFLOPS           7.066 TFLOPS
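The FP32 and FP64 numbers are theoretical peaks: shading units x boost clock x FLOPs issued per shader per clock. A minimal sketch assuming 2 FLOPs per clock (one FMA) per CUDA core on the V100 and 4 FP32 FLOPs per clock per shader on the MI300A (consistent with CDNA 3's packed/dual-issue FP32); these per-clock factors are inferred from the listed figures rather than taken from vendor documentation:

```python
def peak_tflops(shading_units: int, boost_clock_ghz: float, flops_per_clock: int) -> float:
    """Theoretical peak throughput in TFLOPS."""
    return shading_units * boost_clock_ghz * flops_per_clock / 1000

# Tesla V100: 5120 shading units at 1.38 GHz
print(peak_tflops(5120, 1.38, 2))    # ~14.13 TFLOPS FP32 (one FMA per clock)
print(peak_tflops(5120, 1.38, 1))    # ~7.07  TFLOPS FP64 (1:2 rate)

# Instinct MI300A: 14592 shading units at 2.1 GHz, assumed 4 FP32 FLOPs per clock
print(peak_tflops(14592, 2.1, 4))    # ~122.6 TFLOPS FP32
print(peak_tflops(14592, 2.1, 2))    # ~61.3  TFLOPS FP64
```

In both cases the listed FP64 figure is exactly half the FP32 figure, i.e. a 1:2 double-precision rate.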

Graphics Processor

                  AMD Instinct MI300A   Tesla V100 PCIe 32 GB
GPU Name          MI300                 GV100
GPU Variant       -                     -
Architecture      CDNA 3.0              Volta
Foundry           TSMC                  TSMC
Process Size      5 nm                  12 nm
Transistors       146 billion           21.1 billion
Die Size          1017 mm²              815 mm²
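The jump from 12 nm to 5 nm shows up directly in transistor density (transistors divided by die area). This is only a rough figure for the MI300, since its listed die size covers a multi-chiplet package, but it can be derived from the two rows above:

```python
# (transistor count, die size in mm^2) from the table above
chips = {
    "MI300 (5 nm)": (146e9, 1017),
    "GV100 (12 nm)": (21.1e9, 815),
}

for name, (transistors, die_mm2) in chips.items():
    # ~144 Mtransistors/mm^2 for MI300 vs ~26 Mtransistors/mm^2 for GV100
    print(f"{name}: {transistors / die_mm2 / 1e6:.0f} million transistors per mm^2")
```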

Board Design

                  AMD Instinct MI300A   Tesla V100 PCIe 32 GB
TDP               760 W                 250 W
Suggested PSU     1000 W                600 W
Outputs           No outputs            No outputs
Power Connectors  None                  2x 8-pin
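Raw throughput alone ignores the very different power budgets (760 W vs 250 W). A rough FP32-per-watt comparison can be derived from the TDP values above and the FP32 figures earlier on this page; a minimal sketch:

```python
# (FP32 TFLOPS, TDP in watts) taken from the tables on this page
cards = {
    "AMD Instinct MI300A": (122.6, 760),
    "NVIDIA Tesla V100 PCIe 32 GB": (14.13, 250),
}

for name, (tflops, tdp_w) in cards.items():
    # Theoretical FP32 throughput per watt of board power
    print(f"{name}: {tflops / tdp_w * 1000:.0f} GFLOPS/W")  # ~161 vs ~57 GFLOPS/W
```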

Graphics Features

                  AMD Instinct MI300A   Tesla V100 PCIe 32 GB
DirectX           N/A                   12 (12_1)
OpenGL            N/A                   4.6
OpenCL            3.0                   3.0
Vulkan            N/A                   1.3
CUDA              -                     7.0
Shader Model      N/A                   6.6
