CoreWeave is the first cloud provider to deploy Nvidia’s new Blackwell-based GB200 NVL72 systems in production. The rollout runs on Dell-built, liquid-cooled servers, making CoreWeave the first public cloud to offer the hardware.
Each NVL72 unit includes:
- 36 Grace-Blackwell superchips, for a total of 72 Blackwell GPUs and 36 Grace CPUs per rack
- NVLink interconnect that ties all 72 GPUs into a single rack-scale compute domain
- Liquid cooling to handle the rack’s power density

The rack-mounted architecture is designed for large-scale AI workloads such as LLM training and inference.
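To make the training use case concrete, here is a minimal sketch of how a distributed PyTorch job would typically initialize across the GPUs on such a system. It assumes a launch via `torchrun` and uses standard PyTorch distributed conventions; nothing in it is CoreWeave-specific.

```python
# Minimal sketch: initializing a distributed PyTorch training job on a
# multi-GPU node such as one backing a GB200 NVL72 rack. Assumes launch
# via `torchrun --nproc_per_node=<gpus> train.py`; illustrative only.
import os
import torch
import torch.distributed as dist

def init_distributed():
    # torchrun sets RANK, WORLD_SIZE, and LOCAL_RANK for each worker.
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    torch.cuda.set_device(local_rank)

    # NCCL is the standard backend for GPU-to-GPU collectives and uses
    # NVLink/NVSwitch when available within the rack.
    dist.init_process_group(backend="nccl")

    if dist.get_rank() == 0:
        print(f"world size: {dist.get_world_size()}, "
              f"GPUs visible on this node: {torch.cuda.device_count()}")
    return local_rank

if __name__ == "__main__":
    init_distributed()
    dist.destroy_process_group()
```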
Dell manufactured the custom servers used in this deployment. According to Dell, this partnership builds on a 30+ year relationship with Nvidia and marks Dell’s deeper push into AI infrastructure. Dell’s AI server business is already generating $10 billion in annual revenue, with 50% growth expected in 2025.
Nvidia owns a 7% stake in CoreWeave and is closely aligned with the company as part of its effort to grow AI compute outside of traditional hyperscalers. This deployment gives CoreWeave a head start in the competitive “neocloud” market, alongside peers like Lambda and Voltron Data.
Each GB200 NVL72 rack is estimated to cost up to $3.8 million. Despite the high price point, Nvidia expects demand to keep growing, forecasting unit shipments to rise from 5.3 million to 6 million in 2026.
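As a rough, back-of-the-envelope illustration using the $3.8 million rack estimate above and the 72 GPUs per NVL72 rack, the implied per-GPU cost works out as follows; the figures are illustrative, not quoted pricing.

```python
# Back-of-the-envelope cost sketch using the figures cited above.
# Illustrative only; actual pricing and configurations vary.
RACK_COST_USD = 3_800_000   # estimated upper-bound cost per GB200 NVL72 rack
GPUS_PER_RACK = 72          # Blackwell GPUs per NVL72 rack

cost_per_gpu = RACK_COST_USD / GPUS_PER_RACK
print(f"Implied cost per GPU: ${cost_per_gpu:,.0f}")  # ≈ $52,778
```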
CoreWeave plans to expand the deployment to multiple U.S. data centers later in 2025. The company says this will help customers train larger AI models with faster turnaround, improved energy efficiency, and lower total cost of ownership.