According to TrendForce's latest research, NVIDIA's revenue skyrocketed to $30 billion in the second quarter of fiscal 2025, driven by heightened market demand for its flagship Hopper GPUs. Recent supply chain surveys indicate that cloud service providers and OEMs have significantly increased their demand for the H200, which is expected to become the primary shipment model after the third quarter of 2024.
NVIDIA's data center business accounted for nearly 88% of its revenue in the second quarter of FY2025, with approximately 90% of that revenue coming from the Hopper platform. The platform includes the H100, H200, H20, and the GH200 solution, which integrates Grace CPUs and is designed for high-performance computing (HPC) and AI applications. Notably, NVIDIA has maintained a no-price-cut strategy for the H100, which will be replaced by the H200 once existing orders are fulfilled.
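As a rough back-of-the-envelope check on these figures, the sketch below multiplies the reported shares to estimate the implied Hopper revenue for the quarter. The variable names and the rounded $30 billion total are illustrative assumptions based on the numbers cited above, not official NVIDIA disclosures.

```python
# Back-of-the-envelope estimate of implied Hopper revenue for NVIDIA's Q2 FY2025,
# using the approximate shares cited above. Figures are illustrative, not official.

total_revenue_usd_bn = 30.0   # ~$30B total quarterly revenue (rounded)
data_center_share = 0.88      # data center ~= 88% of total revenue
hopper_share_of_dc = 0.90     # Hopper platform ~= 90% of data center revenue

data_center_revenue = total_revenue_usd_bn * data_center_share
hopper_revenue = data_center_revenue * hopper_share_of_dc

print(f"Implied data center revenue: ~${data_center_revenue:.1f}B")  # ~$26.4B
print(f"Implied Hopper platform revenue: ~${hopper_revenue:.1f}B")   # ~$23.8B
```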
Rising market demand for AI servers equipped with the H200 will help offset delays caused by production issues with the new Blackwell platform, keeping NVIDIA's data center revenue robust in the second half of this fiscal year. The Blackwell platform is anticipated to launch next year, with TSMC expanding its CoWoS packaging capacity to approximately 70-80K units by late 2025, roughly double its 2024 capacity. NVIDIA is expected to account for more than half of this capacity.
The H200 is the first GPU to feature 8-layer stacked HBM3E, and the upcoming Blackwell series chips are also expected to use HBM3E extensively. Micron and SK Hynix completed validation of HBM3E in the first quarter of 2024, with mass shipments commencing in the second quarter. Micron's HBM3E is used in the H200, while SK Hynix's HBM3E supports both the H200 and the B100. Samsung's HBM3E has also completed validation and begun shipping, with its initial 8-layer stacked product used primarily in the H200; its validation for the Blackwell series is progressing steadily.