
AMD Radeon Instinct MI8 vs NVIDIA A100 SXM4 40 GB

We compared a professional-market GPU, the Radeon Instinct MI8 (4 GB VRAM), with an AI GPU, the A100 SXM4 40 GB (40 GB VRAM), to see which has better performance in key specifications, benchmark tests, power consumption, and more.

Main Differences

AMD Radeon Instinct MI8's Advantages
Lower TDP (175 W vs 400 W)

NVIDIA A100 SXM4 40 GB's Advantages
Released 3 years and 5 months later
Higher boost clock (1410 MHz)
More VRAM (40 GB vs 4 GB)
Larger VRAM bandwidth (1555 GB/s vs 512.0 GB/s)
2816 additional shading units


Benchmark

FP32 (float)
Radeon Instinct MI8     8.192 TFLOPS
A100 SXM4 40 GB         19.49 TFLOPS (+138%)
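
For context, the A100's FP32 lead is simple arithmetic on the two peak-throughput figures quoted above; a minimal Python sketch of the calculation:

```python
# Relative FP32 advantage of the A100 over the MI8, from the figures above.
mi8_tflops = 8.192
a100_tflops = 19.49

advantage_pct = (a100_tflops / mi8_tflops - 1) * 100
print(f"A100 FP32 advantage: +{advantage_pct:.0f}%")   # +138%
```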

Graphics Card

                   Radeon Instinct MI8        A100 SXM4 40 GB
Release Date       Dec 2016                   May 2020
Generation         Radeon Instinct            Tesla Ampere
Type               Professional               AI GPU
Bus Interface      PCIe 3.0 x16               PCIe 4.0 x16

Clock Speeds

                   Radeon Instinct MI8        A100 SXM4 40 GB
Base Clock         -                          1095 MHz
Boost Clock        -                          1410 MHz
Memory Clock       500 MHz                    1215 MHz

Memory

                   Radeon Instinct MI8        A100 SXM4 40 GB
Memory Size        4 GB                       40 GB
Memory Type        HBM                        HBM2e
Memory Bus         4096 bit                   5120 bit
Bandwidth          512.0 GB/s                 1555 GB/s
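
The bandwidth figures follow from bus width and memory clock. A minimal sketch, assuming both cards' HBM stacks transfer data on both clock edges (double data rate):

```python
# Peak memory bandwidth = (bus width in bits / 8) bytes per transfer
#                         x effective transfer rate (2x the memory clock for HBM).

def hbm_bandwidth_gb_s(bus_width_bits: int, memory_clock_mhz: float) -> float:
    transfers_gt_s = memory_clock_mhz * 2 / 1000     # GT/s (double data rate)
    return bus_width_bits / 8 * transfers_gt_s       # GB/s

print(f"MI8:  {hbm_bandwidth_gb_s(4096, 500):.1f} GB/s")    # 512.0 GB/s
print(f"A100: {hbm_bandwidth_gb_s(5120, 1215):.1f} GB/s")   # 1555.2 GB/s
```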

Render Config

                   Radeon Instinct MI8        A100 SXM4 40 GB
SM Count           -                          108
Compute Units      64                         -
Shading Units      4096                       6912
TMUs               256                        432
ROPs               64                         160
Tensor Cores       -                          432
RT Cores           -                          -
L1 Cache           16 KB (per CU)             192 KB (per SM)
L2 Cache           2 MB                       40 MB
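
The shading-unit totals above are consistent with 64 ALUs per hardware block on both parts: a GCN 3.0 compute unit contains 64 stream processors, and a GA100 SM exposes 64 FP32 CUDA cores. A quick check:

```python
# Shading units = number of hardware blocks x 64 ALUs per block.
print(64 * 64)    # 4096  (MI8: 64 compute units)
print(108 * 64)   # 6912  (A100: 108 SMs)
```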

Theoretical Performance

                   Radeon Instinct MI8        A100 SXM4 40 GB
Pixel Rate         64.00 GPixel/s             225.6 GPixel/s
Texture Rate       256.0 GTexel/s             609.1 GTexel/s
FP16 (half)        8.192 TFLOPS               77.97 TFLOPS
FP32 (float)       8.192 TFLOPS               19.49 TFLOPS
FP64 (double)      512.0 GFLOPS               9.746 TFLOPS
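
These rates can be reproduced from the render config and clocks: pixel rate is ROPs x clock, texture rate is TMUs x clock, and FP32 throughput is shading units x 2 FLOPs (one FMA) x clock. A minimal sketch, assuming a ~1.0 GHz MI8 engine clock (implied by its published 8.192 TFLOPS, since no boost clock is listed) and the A100's 1410 MHz boost, with per-architecture precision ratios (GCN 3.0: FP16 = 1x, FP64 = 1/16 of FP32; GA100 non-tensor: FP16 = 4x, FP64 = 1/2):

```python
# Derive the theoretical rates above from render config, clock, and per-arch ratios.

def theoretical_rates(shaders, tmus, rops, clock_ghz, fp16_ratio, fp64_ratio):
    fp32_tflops = shaders * 2 * clock_ghz / 1000    # 2 FLOPs per FMA per clock
    return {
        "pixel_gpixel_s": rops * clock_ghz,
        "texture_gtexel_s": tmus * clock_ghz,
        "fp32_tflops": fp32_tflops,
        "fp16_tflops": fp32_tflops * fp16_ratio,
        "fp64_tflops": fp32_tflops * fp64_ratio,
    }

mi8  = theoretical_rates(4096, 256, 64,  1.000, fp16_ratio=1, fp64_ratio=1 / 16)
a100 = theoretical_rates(6912, 432, 160, 1.410, fp16_ratio=4, fp64_ratio=1 / 2)
# mi8  -> 64.0 GPixel/s, 256.0 GTexel/s, 8.192 / 8.192 / 0.512 TFLOPS
# a100 -> 225.6 GPixel/s, 609.1 GTexel/s, 19.49 / 77.97 / 9.746 TFLOPS
```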

Graphics Processor

                   Radeon Instinct MI8        A100 SXM4 40 GB
GPU Name           Fiji                       GA100
GPU Variant        Fiji XT CA (215-0862120)   -
Architecture       GCN 3.0                    Ampere
Foundry            TSMC                       TSMC
Process Size       28 nm                      7 nm
Transistors        8.9 billion                54.2 billion
Die Size           596 mm²                    826 mm²

Board Design

                   Radeon Instinct MI8        A100 SXM4 40 GB
TDP                175 W                      400 W
Suggested PSU      450 W                      800 W
Outputs            No outputs                 No outputs
Power Connectors   1x 8-pin                   None

Graphics Features

                   Radeon Instinct MI8        A100 SXM4 40 GB
DirectX            12 (12_0)                  N/A
OpenGL             4.6                        N/A
OpenCL             2.1                        3.0
Vulkan             1.2.170                    N/A
CUDA               -                          8.0
Shader Model       6.5                        N/A

