Oracle has announced the availability of several GPU compute clusters designed to deliver AI training services via Oracle's cloud infrastructure, with the most powerful cluster featuring over 100,000 NVIDIA Blackwell GPUs.
It scales to a total of up to 131,072 NVIDIA B200 GPU accelerator cards, reaching peak FP8 floating-point and INT8 integer performance of up to 2.4 ZFlops (2.4 sextillion operations per second).
The base nodes are NVIDIA GB200 NVL72 liquid-cooled racks, each housing 72 Blackwell GPU accelerator cards. The racks are interconnected over NVLink, which provides 129.6 TB/s of bandwidth.
Despite the impressive number of accelerator cards and peak performance, this setup has yet to surpass Musk's xAI cluster. It remains an exciting announcement nonetheless, as Oracle says NVIDIA will not deliver Blackwell GPUs in large quantities until the first half of next year. No specific launch date for this massive cluster has been given.
A second cluster is equipped with 16,384 NVIDIA H100 GPUs, with FP8/INT8 peak performance of 65 EFlops (65 quintillion operations per second) and total network throughput of 13 Pb/s.
The third cluster features 65,536 NVIDIA H200 GPUs, delivering FP8/INT8 peak performance of 260 EFlops (260 quintillion operations per second) and total network throughput of 52 Pb/s. This cluster is expected to go live later this year.
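The headline figures above line up with simple per-GPU arithmetic: cluster peak is roughly the GPU count times each GPU's sparse-FP8 throughput. The per-GPU numbers below are approximations drawn from NVIDIA's public spec sheets (about 3.96 PFLOPS for H100/H200 and about 18 PFLOPS for B200), not figures from Oracle's announcement, so treat this as a rough consistency check rather than an official calculation.

```python
# Rough consistency check of the quoted cluster peaks.
# Assumed per-GPU sparse-FP8 throughput (approximate public spec values):
#   H100 / H200: ~3.96 PFLOPS each; B200 (Blackwell): ~18 PFLOPS each.
PFLOPS = 1e15

clusters = {
    "Blackwell (B200)": (131_072, 18 * PFLOPS),   # quoted peak ~2.4 ZFlops
    "H100":             (16_384, 3.96 * PFLOPS),  # quoted peak ~65 EFlops
    "H200":             (65_536, 3.96 * PFLOPS),  # quoted peak ~260 EFlops
}

for name, (gpu_count, per_gpu_flops) in clusters.items():
    peak = gpu_count * per_gpu_flops
    print(f"{name}: {gpu_count:>7} GPUs -> ~{peak:.2e} FLOPS peak")
```

Running this reproduces the announced numbers to within a couple of percent, which also confirms that the second cluster's figure must be in exaflops, not petaflops.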
Organizations such as WideLabs and Zoom have already begun adopting Oracle's new clustering services.